2025-07-06 19:20:10.798979 | Job console starting
2025-07-06 19:20:10.814062 | Updating git repos
2025-07-06 19:20:10.928585 | Cloning repos into workspace
2025-07-06 19:20:11.383893 | Restoring repo states
2025-07-06 19:20:11.441515 | Merging changes
2025-07-06 19:20:11.441537 | Checking out repos
2025-07-06 19:20:11.874273 | Preparing playbooks
2025-07-06 19:20:13.416520 | Running Ansible setup
2025-07-06 19:20:19.885030 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-07-06 19:20:20.924479 |
2025-07-06 19:20:20.924696 | PLAY [Base pre]
2025-07-06 19:20:20.942264 |
2025-07-06 19:20:20.942382 | TASK [Setup log path fact]
2025-07-06 19:20:20.971212 | orchestrator | ok
2025-07-06 19:20:21.010768 |
2025-07-06 19:20:21.012081 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-06 19:20:21.057746 | orchestrator | ok
2025-07-06 19:20:21.098283 |
2025-07-06 19:20:21.098392 | TASK [emit-job-header : Print job information]
2025-07-06 19:20:21.169919 | # Job Information
2025-07-06 19:20:21.170079 | Ansible Version: 2.16.14
2025-07-06 19:20:21.170115 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-07-06 19:20:21.170149 | Pipeline: post
2025-07-06 19:20:21.170173 | Executor: 521e9411259a
2025-07-06 19:20:21.170195 | Triggered by: https://github.com/osism/testbed/commit/6b1483ea11ea6b4bb31b3b1a68fb04362e76bb9a
2025-07-06 19:20:21.170218 | Event ID: 37822248-5a9e-11f0-96d6-95cb46b247a9
2025-07-06 19:20:21.176934 |
2025-07-06 19:20:21.177043 | LOOP [emit-job-header : Print node information]
2025-07-06 19:20:21.315615 | orchestrator | ok:
2025-07-06 19:20:21.315829 | orchestrator | # Node Information
2025-07-06 19:20:21.315865 | orchestrator | Inventory Hostname: orchestrator
2025-07-06 19:20:21.315889 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-07-06 19:20:21.315911 | orchestrator | Username: zuul-testbed01
2025-07-06 19:20:21.315931 | orchestrator | Distro: Debian 12.11
2025-07-06 19:20:21.315959 | orchestrator | Provider: static-testbed
2025-07-06 19:20:21.315985 | orchestrator | Region:
2025-07-06 19:20:21.316005 | orchestrator | Label: testbed-orchestrator
2025-07-06 19:20:21.316025 | orchestrator | Product Name: OpenStack Nova
2025-07-06 19:20:21.316044 | orchestrator | Interface IP: 81.163.193.140
2025-07-06 19:20:21.339041 |
2025-07-06 19:20:21.339149 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-07-06 19:20:21.810918 | orchestrator -> localhost | changed
2025-07-06 19:20:21.817316 |
2025-07-06 19:20:21.817412 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-07-06 19:20:23.046381 | orchestrator -> localhost | changed
2025-07-06 19:20:23.064457 |
2025-07-06 19:20:23.064592 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-07-06 19:20:23.580267 | orchestrator -> localhost | ok
2025-07-06 19:20:23.587115 |
2025-07-06 19:20:23.587288 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-07-06 19:20:23.607761 | orchestrator | ok
2025-07-06 19:20:23.626611 | orchestrator | included: /var/lib/zuul/builds/1b8c43777a9244299d5583d25d5cd521/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-07-06 19:20:23.636674 |
2025-07-06 19:20:23.636770 | TASK [add-build-sshkey : Create Temp SSH key]
2025-07-06 19:20:26.312851 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-07-06 19:20:26.313072 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/1b8c43777a9244299d5583d25d5cd521/work/1b8c43777a9244299d5583d25d5cd521_id_rsa
2025-07-06 19:20:26.313110 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/1b8c43777a9244299d5583d25d5cd521/work/1b8c43777a9244299d5583d25d5cd521_id_rsa.pub
2025-07-06 19:20:26.313137 | orchestrator -> localhost | The key fingerprint is:
2025-07-06 19:20:26.313165 | orchestrator -> localhost | SHA256:7hhyEjRfQHbh9gbYxHifCwz/jWaJbM0Wa/wDulqneHo zuul-build-sshkey
2025-07-06 19:20:26.313188 | orchestrator -> localhost | The key's randomart image is:
2025-07-06 19:20:26.313228 | orchestrator -> localhost | +---[RSA 3072]----+
2025-07-06 19:20:26.313250 | orchestrator -> localhost | | .+o+. |
2025-07-06 19:20:26.313272 | orchestrator -> localhost | | .oBo |
2025-07-06 19:20:26.313292 | orchestrator -> localhost | | o .=*. . |
2025-07-06 19:20:26.313312 | orchestrator -> localhost | | . o o+o+ |
2025-07-06 19:20:26.313332 | orchestrator -> localhost | | . ..SBoB |
2025-07-06 19:20:26.313359 | orchestrator -> localhost | | . .+./ . |
2025-07-06 19:20:26.313380 | orchestrator -> localhost | | o o.o*.o |
2025-07-06 19:20:26.313400 | orchestrator -> localhost | | + *Eo o |
2025-07-06 19:20:26.313420 | orchestrator -> localhost | | ==+. . |
2025-07-06 19:20:26.313441 | orchestrator -> localhost | +----[SHA256]-----+
2025-07-06 19:20:26.313489 | orchestrator -> localhost | ok: Runtime: 0:00:01.846624
2025-07-06 19:20:26.321171 |
2025-07-06 19:20:26.321304 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-07-06 19:20:26.379696 | orchestrator | ok
2025-07-06 19:20:26.390820 | orchestrator | included: /var/lib/zuul/builds/1b8c43777a9244299d5583d25d5cd521/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-07-06 19:20:26.411615 |
2025-07-06 19:20:26.411754 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-07-06 19:20:26.441235 | orchestrator | skipping: Conditional result was False
2025-07-06 19:20:26.450076 |
2025-07-06 19:20:26.450203 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-07-06 19:20:27.522645 | orchestrator | changed
2025-07-06 19:20:27.534393 |
2025-07-06 19:20:27.534534 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-07-06 19:20:27.886618 | orchestrator | ok
2025-07-06 19:20:27.923448 |
2025-07-06 19:20:27.924141 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-07-06 19:20:28.430263 | orchestrator | ok
2025-07-06 19:20:28.436691 |
2025-07-06 19:20:28.436813 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-07-06 19:20:28.906577 | orchestrator | ok
2025-07-06 19:20:28.912811 |
2025-07-06 19:20:28.912903 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-07-06 19:20:28.945720 | orchestrator | skipping: Conditional result was False
2025-07-06 19:20:28.951314 |
2025-07-06 19:20:28.951402 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-07-06 19:20:29.923683 | orchestrator -> localhost | changed
2025-07-06 19:20:29.939450 |
2025-07-06 19:20:29.939566 | TASK [add-build-sshkey : Add back temp key]
2025-07-06 19:20:30.506302 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/1b8c43777a9244299d5583d25d5cd521/work/1b8c43777a9244299d5583d25d5cd521_id_rsa (zuul-build-sshkey)
2025-07-06 19:20:30.506528 | orchestrator -> localhost | ok: Runtime: 0:00:00.018735
2025-07-06 19:20:30.513615 |
2025-07-06 19:20:30.513718 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-07-06 19:20:31.052898 | orchestrator | ok
2025-07-06 19:20:31.058623 |
2025-07-06 19:20:31.063668 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-07-06 19:20:31.100174 | orchestrator | skipping: Conditional result was False
2025-07-06 19:20:31.243578 |
2025-07-06 19:20:31.246017 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-07-06 19:20:31.728656 | orchestrator | ok
2025-07-06 19:20:31.769836 |
2025-07-06 19:20:31.769955 | TASK [validate-host : Define zuul_info_dir fact]
2025-07-06 19:20:31.811796 | orchestrator | ok
2025-07-06 19:20:31.819925 |
2025-07-06 19:20:31.820026 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-07-06 19:20:32.598602 | orchestrator -> localhost | ok
2025-07-06 19:20:32.621196 |
2025-07-06 19:20:32.621727 | TASK [validate-host : Collect information about the host]
2025-07-06 19:20:34.530799 | orchestrator | ok
2025-07-06 19:20:34.563450 |
2025-07-06 19:20:34.563617 | TASK [validate-host : Sanitize hostname]
2025-07-06 19:20:34.713444 | orchestrator | ok
2025-07-06 19:20:34.723231 |
2025-07-06 19:20:34.723378 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-07-06 19:20:35.982846 | orchestrator -> localhost | changed
2025-07-06 19:20:35.994896 |
2025-07-06 19:20:35.995040 | TASK [validate-host : Collect information about zuul worker]
2025-07-06 19:20:36.580770 | orchestrator | ok
2025-07-06 19:20:36.587860 |
2025-07-06 19:20:36.587998 | TASK [validate-host : Write out all zuul information for each host]
2025-07-06 19:20:37.392858 | orchestrator -> localhost | changed
2025-07-06 19:20:37.404329 |
2025-07-06 19:20:37.404464 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-07-06 19:20:37.701136 | orchestrator | ok
2025-07-06 19:20:37.717442 |
2025-07-06 19:20:37.717621 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-07-06 19:21:11.077596 | orchestrator | changed:
2025-07-06 19:21:11.077830 | orchestrator | .d..t...... src/
2025-07-06 19:21:11.077866 | orchestrator | .d..t...... src/github.com/
2025-07-06 19:21:11.077891 | orchestrator | .d..t...... src/github.com/osism/
2025-07-06 19:21:11.077914 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-07-06 19:21:11.077935 | orchestrator | RedHat.yml
2025-07-06 19:21:11.089133 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-07-06 19:21:11.089151 | orchestrator | RedHat.yml
2025-07-06 19:21:11.089203 | orchestrator | = 1.53.0"...
2025-07-06 19:21:27.384328 | orchestrator | 19:21:27.384 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-07-06 19:21:28.340279 | orchestrator | 19:21:28.340 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-07-06 19:21:29.245603 | orchestrator | 19:21:29.245 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-07-06 19:21:30.220821 | orchestrator | 19:21:30.220 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.2.0...
2025-07-06 19:21:31.411532 | orchestrator | 19:21:31.411 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.2.0 (signed, key ID 4F80527A391BEFD2)
2025-07-06 19:21:32.242423 | orchestrator | 19:21:32.242 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-07-06 19:21:33.146755 | orchestrator | 19:21:33.145 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-07-06 19:21:33.146824 | orchestrator | 19:21:33.145 STDOUT terraform: Providers are signed by their developers.
2025-07-06 19:21:33.146832 | orchestrator | 19:21:33.145 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-07-06 19:21:33.146839 | orchestrator | 19:21:33.146 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-07-06 19:21:33.146845 | orchestrator | 19:21:33.146 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-07-06 19:21:33.146854 | orchestrator | 19:21:33.146 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-07-06 19:21:33.146864 | orchestrator | 19:21:33.146 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-07-06 19:21:33.146869 | orchestrator | 19:21:33.146 STDOUT terraform: you run "tofu init" in the future.
2025-07-06 19:21:33.146875 | orchestrator | 19:21:33.146 STDOUT terraform: OpenTofu has been successfully initialized!
2025-07-06 19:21:33.146881 | orchestrator | 19:21:33.146 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-07-06 19:21:33.146887 | orchestrator | 19:21:33.146 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-07-06 19:21:33.146892 | orchestrator | 19:21:33.146 STDOUT terraform: should now work.
2025-07-06 19:21:33.146898 | orchestrator | 19:21:33.146 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-07-06 19:21:33.146904 | orchestrator | 19:21:33.146 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-07-06 19:21:33.146910 | orchestrator | 19:21:33.146 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-07-06 19:21:33.268057 | orchestrator | 19:21:33.267 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-07-06 19:21:33.268156 | orchestrator | 19:21:33.267 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-07-06 19:21:33.477132 | orchestrator | 19:21:33.476 STDOUT terraform: Created and switched to workspace "ci"!
2025-07-06 19:21:33.477199 | orchestrator | 19:21:33.477 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-07-06 19:21:33.477311 | orchestrator | 19:21:33.477 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-07-06 19:21:33.477351 | orchestrator | 19:21:33.477 STDOUT terraform: for this configuration.
2025-07-06 19:21:33.652122 | orchestrator | 19:21:33.651 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
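The init output above shows OpenTofu resolving three providers (hashicorp/null v3.2.4, terraform-provider-openstack/openstack v3.2.0, hashicorp/local v2.5.3) and writing .terraform.lock.hcl. A minimal sketch of the kind of required_providers block that yields this resolution follows; only the hashicorp/local constraint ">= 2.2.0" is visible verbatim in the log, so the other version constraints are illustrative assumptions rather than the testbed repository's actual values.

    terraform {
      required_providers {
        # Constraint taken from the "Finding hashicorp/local versions matching ..." line above.
        local = {
          source  = "hashicorp/local"
          version = ">= 2.2.0"
        }
        # Assumed constraints; the log only shows the installed versions (v3.2.0 and v3.2.4).
        openstack = {
          source  = "terraform-provider-openstack/openstack"
          version = ">= 1.53.0"
        }
        null = {
          source  = "hashicorp/null"
          version = ">= 3.0.0"
        }
      }
    }

Committing the generated .terraform.lock.hcl, as the init message recommends, pins later "tofu init" runs to the same provider selections.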
2025-07-06 19:21:33.652192 | orchestrator | 19:21:33.652 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead. 2025-07-06 19:21:33.748403 | orchestrator | 19:21:33.748 STDOUT terraform: ci.auto.tfvars 2025-07-06 19:21:33.751010 | orchestrator | 19:21:33.750 STDOUT terraform: default_custom.tf 2025-07-06 19:21:33.927501 | orchestrator | 19:21:33.927 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead. 2025-07-06 19:21:34.907057 | orchestrator | 19:21:34.906 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-07-06 19:21:35.413393 | orchestrator | 19:21:35.410 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-07-06 19:21:35.675666 | orchestrator | 19:21:35.674 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-07-06 19:21:35.675749 | orchestrator | 19:21:35.674 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-07-06 19:21:35.675760 | orchestrator | 19:21:35.674 STDOUT terraform:  + create 2025-07-06 19:21:35.675769 | orchestrator | 19:21:35.674 STDOUT terraform:  <= read (data resources) 2025-07-06 19:21:35.675777 | orchestrator | 19:21:35.674 STDOUT terraform: OpenTofu will perform the following actions: 2025-07-06 19:21:35.675785 | orchestrator | 19:21:35.674 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-07-06 19:21:35.675793 | orchestrator | 19:21:35.674 STDOUT terraform:  # (config refers to values not yet known) 2025-07-06 19:21:35.675801 | orchestrator | 19:21:35.674 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-07-06 19:21:35.675808 | orchestrator | 19:21:35.674 STDOUT terraform:  + checksum = (known after apply) 2025-07-06 19:21:35.675815 | orchestrator | 19:21:35.674 STDOUT terraform:  + created_at = (known after apply) 2025-07-06 19:21:35.675823 | orchestrator | 19:21:35.674 STDOUT terraform:  + file = (known after apply) 2025-07-06 19:21:35.675830 | orchestrator | 19:21:35.674 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.675837 | orchestrator | 19:21:35.674 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.675866 | orchestrator | 19:21:35.674 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-07-06 19:21:35.675874 | orchestrator | 19:21:35.674 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-07-06 19:21:35.675881 | orchestrator | 19:21:35.674 STDOUT terraform:  + most_recent = true 2025-07-06 19:21:35.675889 | orchestrator | 19:21:35.674 STDOUT terraform:  + name = (known after apply) 2025-07-06 19:21:35.675896 | orchestrator | 19:21:35.674 STDOUT terraform:  + protected = (known after apply) 2025-07-06 19:21:35.675903 | orchestrator | 19:21:35.674 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.675911 | orchestrator | 19:21:35.674 STDOUT terraform:  + schema = (known after apply) 2025-07-06 19:21:35.675918 | orchestrator | 19:21:35.674 STDOUT terraform:  + size_bytes = (known after apply) 2025-07-06 19:21:35.675925 | orchestrator | 19:21:35.674 STDOUT terraform:  + tags = (known after apply) 2025-07-06 19:21:35.675932 | orchestrator | 19:21:35.674 STDOUT terraform:  + updated_at = (known after apply) 2025-07-06 19:21:35.675939 | orchestrator | 
19:21:35.674 STDOUT terraform:  } 2025-07-06 19:21:35.675951 | orchestrator | 19:21:35.674 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-07-06 19:21:35.675958 | orchestrator | 19:21:35.674 STDOUT terraform:  # (config refers to values not yet known) 2025-07-06 19:21:35.675965 | orchestrator | 19:21:35.674 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-07-06 19:21:35.675973 | orchestrator | 19:21:35.674 STDOUT terraform:  + checksum = (known after apply) 2025-07-06 19:21:35.675987 | orchestrator | 19:21:35.674 STDOUT terraform:  + created_at = (known after apply) 2025-07-06 19:21:35.675994 | orchestrator | 19:21:35.674 STDOUT terraform:  + file = (known after apply) 2025-07-06 19:21:35.676001 | orchestrator | 19:21:35.674 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.676008 | orchestrator | 19:21:35.675 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.676015 | orchestrator | 19:21:35.675 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-07-06 19:21:35.676022 | orchestrator | 19:21:35.675 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-07-06 19:21:35.676038 | orchestrator | 19:21:35.675 STDOUT terraform:  + most_recent = true 2025-07-06 19:21:35.676046 | orchestrator | 19:21:35.675 STDOUT terraform:  + name = (known after apply) 2025-07-06 19:21:35.676052 | orchestrator | 19:21:35.675 STDOUT terraform:  + protected = (known after apply) 2025-07-06 19:21:35.676059 | orchestrator | 19:21:35.675 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.676082 | orchestrator | 19:21:35.675 STDOUT terraform:  + schema = (known after apply) 2025-07-06 19:21:35.676090 | orchestrator | 19:21:35.675 STDOUT terraform:  + size_bytes = (known after apply) 2025-07-06 19:21:35.676096 | orchestrator | 19:21:35.675 STDOUT terraform:  + tags = (known after apply) 2025-07-06 19:21:35.676103 | orchestrator | 19:21:35.675 STDOUT terraform:  + updated_at = (known after apply) 2025-07-06 19:21:35.676109 | orchestrator | 19:21:35.675 STDOUT terraform:  } 2025-07-06 19:21:35.676116 | orchestrator | 19:21:35.675 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-07-06 19:21:35.676128 | orchestrator | 19:21:35.675 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-07-06 19:21:35.676135 | orchestrator | 19:21:35.675 STDOUT terraform:  + content = (known after apply) 2025-07-06 19:21:35.676142 | orchestrator | 19:21:35.675 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-07-06 19:21:35.676149 | orchestrator | 19:21:35.675 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-07-06 19:21:35.676155 | orchestrator | 19:21:35.675 STDOUT terraform:  + content_md5 = (known after apply) 2025-07-06 19:21:35.676162 | orchestrator | 19:21:35.675 STDOUT terraform:  + content_sha1 = (known after apply) 2025-07-06 19:21:35.676169 | orchestrator | 19:21:35.675 STDOUT terraform:  + content_sha256 = (known after apply) 2025-07-06 19:21:35.676176 | orchestrator | 19:21:35.675 STDOUT terraform:  + content_sha512 = (known after apply) 2025-07-06 19:21:35.676182 | orchestrator | 19:21:35.675 STDOUT terraform:  + directory_permission = "0777" 2025-07-06 19:21:35.676189 | orchestrator | 19:21:35.675 STDOUT terraform:  + file_permission = "0644" 2025-07-06 19:21:35.676196 | orchestrator | 19:21:35.675 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-07-06 19:21:35.676202 | orchestrator | 19:21:35.675 STDOUT 
terraform:  + id = (known after apply) 2025-07-06 19:21:35.676209 | orchestrator | 19:21:35.675 STDOUT terraform:  } 2025-07-06 19:21:35.676216 | orchestrator | 19:21:35.675 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-07-06 19:21:35.676223 | orchestrator | 19:21:35.675 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-07-06 19:21:35.676229 | orchestrator | 19:21:35.675 STDOUT terraform:  + content = (known after apply) 2025-07-06 19:21:35.676238 | orchestrator | 19:21:35.675 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-07-06 19:21:35.676248 | orchestrator | 19:21:35.675 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-07-06 19:21:35.676258 | orchestrator | 19:21:35.675 STDOUT terraform:  + content_md5 = (known after apply) 2025-07-06 19:21:35.676269 | orchestrator | 19:21:35.675 STDOUT terraform:  + content_sha1 = (known after apply) 2025-07-06 19:21:35.676279 | orchestrator | 19:21:35.675 STDOUT terraform:  + content_sha256 = (known after apply) 2025-07-06 19:21:35.676290 | orchestrator | 19:21:35.675 STDOUT terraform:  + content_sha512 = (known after apply) 2025-07-06 19:21:35.676300 | orchestrator | 19:21:35.675 STDOUT terraform:  + directory_permission = "0777" 2025-07-06 19:21:35.676310 | orchestrator | 19:21:35.675 STDOUT terraform:  + file_permission = "0644" 2025-07-06 19:21:35.676316 | orchestrator | 19:21:35.675 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-07-06 19:21:35.676323 | orchestrator | 19:21:35.676 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.676330 | orchestrator | 19:21:35.676 STDOUT terraform:  } 2025-07-06 19:21:35.676341 | orchestrator | 19:21:35.676 STDOUT terraform:  # local_file.inventory will be created 2025-07-06 19:21:35.676352 | orchestrator | 19:21:35.676 STDOUT terraform:  + resource "local_file" "inventory" { 2025-07-06 19:21:35.676359 | orchestrator | 19:21:35.676 STDOUT terraform:  + content = (known after apply) 2025-07-06 19:21:35.676377 | orchestrator | 19:21:35.676 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-07-06 19:21:35.676384 | orchestrator | 19:21:35.676 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-07-06 19:21:35.676390 | orchestrator | 19:21:35.676 STDOUT terraform:  + content_md5 = (known after apply) 2025-07-06 19:21:35.676397 | orchestrator | 19:21:35.676 STDOUT terraform:  + content_sha1 = (known after apply) 2025-07-06 19:21:35.676403 | orchestrator | 19:21:35.676 STDOUT terraform:  + content_sha256 = (known after apply) 2025-07-06 19:21:35.676410 | orchestrator | 19:21:35.676 STDOUT terraform:  + content_sha512 = (known after apply) 2025-07-06 19:21:35.676416 | orchestrator | 19:21:35.676 STDOUT terraform:  + directory_permission = "0777" 2025-07-06 19:21:35.676423 | orchestrator | 19:21:35.676 STDOUT terraform:  + file_permission = "0644" 2025-07-06 19:21:35.676432 | orchestrator | 19:21:35.676 STDOUT terraform:  + filename = "inventory.ci" 2025-07-06 19:21:35.676439 | orchestrator | 19:21:35.676 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.676445 | orchestrator | 19:21:35.676 STDOUT terraform:  } 2025-07-06 19:21:35.676475 | orchestrator | 19:21:35.676 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-07-06 19:21:35.676486 | orchestrator | 19:21:35.676 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-07-06 19:21:35.678700 | orchestrator | 19:21:35.676 STDOUT terraform:  + content = (sensitive value) 2025-07-06 
19:21:35.678745 | orchestrator | 19:21:35.676 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-07-06 19:21:35.678753 | orchestrator | 19:21:35.676 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-07-06 19:21:35.678760 | orchestrator | 19:21:35.676 STDOUT terraform:  + content_md5 = (known after apply) 2025-07-06 19:21:35.678767 | orchestrator | 19:21:35.676 STDOUT terraform:  + content_sha1 = (known after apply) 2025-07-06 19:21:35.678773 | orchestrator | 19:21:35.676 STDOUT terraform:  + content_sha256 = (known after apply) 2025-07-06 19:21:35.678780 | orchestrator | 19:21:35.676 STDOUT terraform:  + content_sha512 = (known after apply) 2025-07-06 19:21:35.678787 | orchestrator | 19:21:35.676 STDOUT terraform:  + directory_permission = "0700" 2025-07-06 19:21:35.678793 | orchestrator | 19:21:35.676 STDOUT terraform:  + file_permission = "0600" 2025-07-06 19:21:35.678800 | orchestrator | 19:21:35.676 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-07-06 19:21:35.678806 | orchestrator | 19:21:35.676 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.678813 | orchestrator | 19:21:35.676 STDOUT terraform:  } 2025-07-06 19:21:35.678820 | orchestrator | 19:21:35.676 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-07-06 19:21:35.678827 | orchestrator | 19:21:35.676 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-07-06 19:21:35.678833 | orchestrator | 19:21:35.676 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.678840 | orchestrator | 19:21:35.676 STDOUT terraform:  } 2025-07-06 19:21:35.678847 | orchestrator | 19:21:35.676 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-07-06 19:21:35.678876 | orchestrator | 19:21:35.676 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-07-06 19:21:35.678883 | orchestrator | 19:21:35.676 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:35.678890 | orchestrator | 19:21:35.676 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.678897 | orchestrator | 19:21:35.677 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.678910 | orchestrator | 19:21:35.677 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:35.678916 | orchestrator | 19:21:35.677 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.678923 | orchestrator | 19:21:35.677 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-07-06 19:21:35.678929 | orchestrator | 19:21:35.677 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.678936 | orchestrator | 19:21:35.677 STDOUT terraform:  + size = 80 2025-07-06 19:21:35.678942 | orchestrator | 19:21:35.677 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:35.678949 | orchestrator | 19:21:35.677 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:35.678956 | orchestrator | 19:21:35.677 STDOUT terraform:  } 2025-07-06 19:21:35.678962 | orchestrator | 19:21:35.677 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-07-06 19:21:35.678969 | orchestrator | 19:21:35.677 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-07-06 19:21:35.678975 | orchestrator | 19:21:35.677 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:35.678982 | orchestrator | 19:21:35.677 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 
19:21:35.678988 | orchestrator | 19:21:35.677 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.678995 | orchestrator | 19:21:35.677 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:35.679001 | orchestrator | 19:21:35.677 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.679016 | orchestrator | 19:21:35.677 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-07-06 19:21:35.679023 | orchestrator | 19:21:35.677 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.679030 | orchestrator | 19:21:35.677 STDOUT terraform:  + size = 80 2025-07-06 19:21:35.679037 | orchestrator | 19:21:35.677 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:35.679043 | orchestrator | 19:21:35.677 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:35.679050 | orchestrator | 19:21:35.677 STDOUT terraform:  } 2025-07-06 19:21:35.679057 | orchestrator | 19:21:35.677 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-07-06 19:21:35.679063 | orchestrator | 19:21:35.677 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-07-06 19:21:35.679070 | orchestrator | 19:21:35.677 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:35.679082 | orchestrator | 19:21:35.677 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.679089 | orchestrator | 19:21:35.677 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.679095 | orchestrator | 19:21:35.677 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:35.679102 | orchestrator | 19:21:35.677 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.679108 | orchestrator | 19:21:35.677 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-07-06 19:21:35.679115 | orchestrator | 19:21:35.677 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.679122 | orchestrator | 19:21:35.677 STDOUT terraform:  + size = 80 2025-07-06 19:21:35.679128 | orchestrator | 19:21:35.677 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:35.679135 | orchestrator | 19:21:35.677 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:35.679141 | orchestrator | 19:21:35.677 STDOUT terraform:  } 2025-07-06 19:21:35.679148 | orchestrator | 19:21:35.677 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-07-06 19:21:35.679155 | orchestrator | 19:21:35.678 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-07-06 19:21:35.679161 | orchestrator | 19:21:35.678 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:35.679172 | orchestrator | 19:21:35.678 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.679179 | orchestrator | 19:21:35.678 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.679185 | orchestrator | 19:21:35.678 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:35.679192 | orchestrator | 19:21:35.678 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.679198 | orchestrator | 19:21:35.678 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-07-06 19:21:35.679205 | orchestrator | 19:21:35.678 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.679211 | orchestrator | 19:21:35.678 STDOUT terraform:  + size = 80 2025-07-06 19:21:35.679218 | orchestrator | 19:21:35.678 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-07-06 19:21:35.679225 | orchestrator | 19:21:35.678 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:35.679231 | orchestrator | 19:21:35.678 STDOUT terraform:  } 2025-07-06 19:21:35.679238 | orchestrator | 19:21:35.678 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-07-06 19:21:35.679244 | orchestrator | 19:21:35.678 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-07-06 19:21:35.679251 | orchestrator | 19:21:35.678 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:35.679257 | orchestrator | 19:21:35.678 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.679264 | orchestrator | 19:21:35.678 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.679270 | orchestrator | 19:21:35.678 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:35.679282 | orchestrator | 19:21:35.678 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.679294 | orchestrator | 19:21:35.678 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-07-06 19:21:35.679301 | orchestrator | 19:21:35.678 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.679307 | orchestrator | 19:21:35.678 STDOUT terraform:  + size = 80 2025-07-06 19:21:35.679314 | orchestrator | 19:21:35.678 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:35.679321 | orchestrator | 19:21:35.678 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:35.679327 | orchestrator | 19:21:35.678 STDOUT terraform:  } 2025-07-06 19:21:35.679334 | orchestrator | 19:21:35.678 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-07-06 19:21:35.679345 | orchestrator | 19:21:35.678 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-07-06 19:21:35.679352 | orchestrator | 19:21:35.679 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:35.679358 | orchestrator | 19:21:35.679 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.679365 | orchestrator | 19:21:35.679 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.679371 | orchestrator | 19:21:35.679 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:35.679378 | orchestrator | 19:21:35.679 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.679384 | orchestrator | 19:21:35.679 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-07-06 19:21:35.679393 | orchestrator | 19:21:35.679 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.679400 | orchestrator | 19:21:35.679 STDOUT terraform:  + size = 80 2025-07-06 19:21:35.679407 | orchestrator | 19:21:35.679 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:35.679413 | orchestrator | 19:21:35.679 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:35.679420 | orchestrator | 19:21:35.679 STDOUT terraform:  } 2025-07-06 19:21:35.679466 | orchestrator | 19:21:35.679 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-07-06 19:21:35.679531 | orchestrator | 19:21:35.679 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-07-06 19:21:35.679565 | orchestrator | 19:21:35.679 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:35.679604 | orchestrator | 19:21:35.679 STDOUT terraform:  + availability_zone = "nova" 
2025-07-06 19:21:35.679646 | orchestrator | 19:21:35.679 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.679696 | orchestrator | 19:21:35.679 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:35.679733 | orchestrator | 19:21:35.679 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.679792 | orchestrator | 19:21:35.679 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-07-06 19:21:35.679840 | orchestrator | 19:21:35.679 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.679867 | orchestrator | 19:21:35.679 STDOUT terraform:  + size = 80 2025-07-06 19:21:35.679920 | orchestrator | 19:21:35.679 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:35.679946 | orchestrator | 19:21:35.679 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:35.679956 | orchestrator | 19:21:35.679 STDOUT terraform:  } 2025-07-06 19:21:35.680016 | orchestrator | 19:21:35.679 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-07-06 19:21:35.680079 | orchestrator | 19:21:35.680 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:35.680126 | orchestrator | 19:21:35.680 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:35.680148 | orchestrator | 19:21:35.680 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.680186 | orchestrator | 19:21:35.680 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.680235 | orchestrator | 19:21:35.680 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.680274 | orchestrator | 19:21:35.680 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-07-06 19:21:35.680326 | orchestrator | 19:21:35.680 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.680336 | orchestrator | 19:21:35.680 STDOUT terraform:  + size = 20 2025-07-06 19:21:35.680380 | orchestrator | 19:21:35.680 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:35.680406 | orchestrator | 19:21:35.680 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:35.680416 | orchestrator | 19:21:35.680 STDOUT terraform:  } 2025-07-06 19:21:35.680523 | orchestrator | 19:21:35.680 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-07-06 19:21:35.680552 | orchestrator | 19:21:35.680 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:35.680586 | orchestrator | 19:21:35.680 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:35.680624 | orchestrator | 19:21:35.680 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.680662 | orchestrator | 19:21:35.680 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.680714 | orchestrator | 19:21:35.680 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.680766 | orchestrator | 19:21:35.680 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-07-06 19:21:35.680803 | orchestrator | 19:21:35.680 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.680837 | orchestrator | 19:21:35.680 STDOUT terraform:  + size = 20 2025-07-06 19:21:35.680861 | orchestrator | 19:21:35.680 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:35.680886 | orchestrator | 19:21:35.680 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:35.680913 | orchestrator | 19:21:35.680 STDOUT terraform:  } 2025-07-06 19:21:35.680959 | orchestrator 
| 19:21:35.680 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-07-06 19:21:35.681014 | orchestrator | 19:21:35.680 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:35.681049 | orchestrator | 19:21:35.681 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:35.681086 | orchestrator | 19:21:35.681 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.681123 | orchestrator | 19:21:35.681 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.681172 | orchestrator | 19:21:35.681 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.681224 | orchestrator | 19:21:35.681 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-07-06 19:21:35.681261 | orchestrator | 19:21:35.681 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.681282 | orchestrator | 19:21:35.681 STDOUT terraform:  + size = 20 2025-07-06 19:21:35.681325 | orchestrator | 19:21:35.681 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:35.681335 | orchestrator | 19:21:35.681 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:35.681344 | orchestrator | 19:21:35.681 STDOUT terraform:  } 2025-07-06 19:21:35.681407 | orchestrator | 19:21:35.681 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-07-06 19:21:35.681491 | orchestrator | 19:21:35.681 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:35.681540 | orchestrator | 19:21:35.681 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:35.681565 | orchestrator | 19:21:35.681 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.681617 | orchestrator | 19:21:35.681 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.681653 | orchestrator | 19:21:35.681 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.681707 | orchestrator | 19:21:35.681 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-07-06 19:21:35.681746 | orchestrator | 19:21:35.681 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.681782 | orchestrator | 19:21:35.681 STDOUT terraform:  + size = 20 2025-07-06 19:21:35.681808 | orchestrator | 19:21:35.681 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:35.681848 | orchestrator | 19:21:35.681 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:35.681858 | orchestrator | 19:21:35.681 STDOUT terraform:  } 2025-07-06 19:21:35.681902 | orchestrator | 19:21:35.681 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-07-06 19:21:35.681957 | orchestrator | 19:21:35.681 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:35.681993 | orchestrator | 19:21:35.681 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:35.686072 | orchestrator | 19:21:35.681 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.686111 | orchestrator | 19:21:35.682 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.686119 | orchestrator | 19:21:35.682 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.686125 | orchestrator | 19:21:35.682 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-07-06 19:21:35.686131 | orchestrator | 19:21:35.682 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.686154 | orchestrator | 19:21:35.682 STDOUT 
terraform:  + size = 20 2025-07-06 19:21:35.686160 | orchestrator | 19:21:35.682 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:35.686165 | orchestrator | 19:21:35.682 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:35.686171 | orchestrator | 19:21:35.682 STDOUT terraform:  } 2025-07-06 19:21:35.686177 | orchestrator | 19:21:35.682 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-07-06 19:21:35.686184 | orchestrator | 19:21:35.682 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:35.686189 | orchestrator | 19:21:35.682 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:35.686195 | orchestrator | 19:21:35.682 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.686200 | orchestrator | 19:21:35.682 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.686206 | orchestrator | 19:21:35.682 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.686211 | orchestrator | 19:21:35.682 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-07-06 19:21:35.686216 | orchestrator | 19:21:35.682 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.686229 | orchestrator | 19:21:35.682 STDOUT terraform:  + size = 20 2025-07-06 19:21:35.686235 | orchestrator | 19:21:35.682 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:35.686240 | orchestrator | 19:21:35.682 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:35.686246 | orchestrator | 19:21:35.682 STDOUT terraform:  } 2025-07-06 19:21:35.686251 | orchestrator | 19:21:35.682 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-07-06 19:21:35.686257 | orchestrator | 19:21:35.682 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:35.686262 | orchestrator | 19:21:35.682 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:35.686268 | orchestrator | 19:21:35.682 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.686273 | orchestrator | 19:21:35.682 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.686278 | orchestrator | 19:21:35.682 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.686284 | orchestrator | 19:21:35.682 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-07-06 19:21:35.686289 | orchestrator | 19:21:35.682 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.686294 | orchestrator | 19:21:35.682 STDOUT terraform:  + size = 20 2025-07-06 19:21:35.686300 | orchestrator | 19:21:35.682 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:35.686305 | orchestrator | 19:21:35.682 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:35.686311 | orchestrator | 19:21:35.682 STDOUT terraform:  } 2025-07-06 19:21:35.686316 | orchestrator | 19:21:35.682 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-07-06 19:21:35.686321 | orchestrator | 19:21:35.683 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:35.686331 | orchestrator | 19:21:35.683 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:35.686336 | orchestrator | 19:21:35.683 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.686349 | orchestrator | 19:21:35.683 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.686355 | orchestrator | 
19:21:35.683 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.686360 | orchestrator | 19:21:35.683 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-07-06 19:21:35.686365 | orchestrator | 19:21:35.683 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.686371 | orchestrator | 19:21:35.683 STDOUT terraform:  + size = 20 2025-07-06 19:21:35.686376 | orchestrator | 19:21:35.683 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:35.686381 | orchestrator | 19:21:35.683 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:35.686387 | orchestrator | 19:21:35.683 STDOUT terraform:  } 2025-07-06 19:21:35.686392 | orchestrator | 19:21:35.683 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-07-06 19:21:35.686397 | orchestrator | 19:21:35.683 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:35.686403 | orchestrator | 19:21:35.683 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:35.686408 | orchestrator | 19:21:35.683 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.686413 | orchestrator | 19:21:35.683 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.686419 | orchestrator | 19:21:35.683 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:35.686424 | orchestrator | 19:21:35.683 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-07-06 19:21:35.686429 | orchestrator | 19:21:35.683 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.686435 | orchestrator | 19:21:35.683 STDOUT terraform:  + size = 20 2025-07-06 19:21:35.686440 | orchestrator | 19:21:35.683 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:35.686445 | orchestrator | 19:21:35.683 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:35.686473 | orchestrator | 19:21:35.683 STDOUT terraform:  } 2025-07-06 19:21:35.686508 | orchestrator | 19:21:35.683 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-07-06 19:21:35.686519 | orchestrator | 19:21:35.683 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-07-06 19:21:35.686528 | orchestrator | 19:21:35.683 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-06 19:21:35.686537 | orchestrator | 19:21:35.683 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-06 19:21:35.686543 | orchestrator | 19:21:35.683 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-06 19:21:35.686548 | orchestrator | 19:21:35.683 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.686553 | orchestrator | 19:21:35.683 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.686565 | orchestrator | 19:21:35.683 STDOUT terraform:  + config_drive = true 2025-07-06 19:21:35.686570 | orchestrator | 19:21:35.683 STDOUT terraform:  + created = (known after apply) 2025-07-06 19:21:35.686576 | orchestrator | 19:21:35.683 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-06 19:21:35.686581 | orchestrator | 19:21:35.683 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-07-06 19:21:35.686586 | orchestrator | 19:21:35.683 STDOUT terraform:  + force_delete = false 2025-07-06 19:21:35.686592 | orchestrator | 19:21:35.683 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-06 19:21:35.686597 | orchestrator | 19:21:35.684 STDOUT terraform:  + id = (known after apply) 2025-07-06 
19:21:35.686602 | orchestrator | 19:21:35.684 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:35.686608 | orchestrator | 19:21:35.684 STDOUT terraform:  + image_name = (known after apply) 2025-07-06 19:21:35.686613 | orchestrator | 19:21:35.684 STDOUT terraform:  + key_pair = "testbed" 2025-07-06 19:21:35.686624 | orchestrator | 19:21:35.684 STDOUT terraform:  + name = "testbed-manager" 2025-07-06 19:21:35.686630 | orchestrator | 19:21:35.684 STDOUT terraform:  + power_state = "active" 2025-07-06 19:21:35.686635 | orchestrator | 19:21:35.684 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.686641 | orchestrator | 19:21:35.684 STDOUT terraform:  + security_groups = (known after apply) 2025-07-06 19:21:35.686646 | orchestrator | 19:21:35.684 STDOUT terraform:  + stop_before_destroy = false 2025-07-06 19:21:35.686651 | orchestrator | 19:21:35.684 STDOUT terraform:  + updated = (known after apply) 2025-07-06 19:21:35.686657 | orchestrator | 19:21:35.684 STDOUT terraform:  + user_data = (sensitive value) 2025-07-06 19:21:35.686662 | orchestrator | 19:21:35.684 STDOUT terraform:  + block_device { 2025-07-06 19:21:35.686667 | orchestrator | 19:21:35.684 STDOUT terraform:  + boot_index = 0 2025-07-06 19:21:35.686673 | orchestrator | 19:21:35.684 STDOUT terraform:  + delete_on_termination = false 2025-07-06 19:21:35.686678 | orchestrator | 19:21:35.684 STDOUT terraform:  + destination_type = "volume" 2025-07-06 19:21:35.686683 | orchestrator | 19:21:35.684 STDOUT terraform:  + multiattach = false 2025-07-06 19:21:35.686689 | orchestrator | 19:21:35.684 STDOUT terraform:  + source_type = "volume" 2025-07-06 19:21:35.686694 | orchestrator | 19:21:35.684 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:35.686699 | orchestrator | 19:21:35.684 STDOUT terraform:  } 2025-07-06 19:21:35.686705 | orchestrator | 19:21:35.684 STDOUT terraform:  + network { 2025-07-06 19:21:35.686710 | orchestrator | 19:21:35.684 STDOUT terraform:  + access_network = false 2025-07-06 19:21:35.686715 | orchestrator | 19:21:35.684 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-06 19:21:35.686721 | orchestrator | 19:21:35.684 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-06 19:21:35.686726 | orchestrator | 19:21:35.684 STDOUT terraform:  + mac = (known after apply) 2025-07-06 19:21:35.686737 | orchestrator | 19:21:35.684 STDOUT terraform:  + name = (known after apply) 2025-07-06 19:21:35.686742 | orchestrator | 19:21:35.684 STDOUT terraform:  + port = (known after apply) 2025-07-06 19:21:35.686747 | orchestrator | 19:21:35.684 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:35.686753 | orchestrator | 19:21:35.684 STDOUT terraform:  } 2025-07-06 19:21:35.686758 | orchestrator | 19:21:35.684 STDOUT terraform:  } 2025-07-06 19:21:35.686764 | orchestrator | 19:21:35.684 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-07-06 19:21:35.686769 | orchestrator | 19:21:35.684 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-06 19:21:35.686775 | orchestrator | 19:21:35.684 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-06 19:21:35.686784 | orchestrator | 19:21:35.684 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-06 19:21:35.686789 | orchestrator | 19:21:35.684 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-06 19:21:35.686794 | orchestrator | 19:21:35.684 STDOUT terraform:  + all_tags = (known after apply) 
2025-07-06 19:21:35.686800 | orchestrator | 19:21:35.684 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.686805 | orchestrator | 19:21:35.684 STDOUT terraform:  + config_drive = true 2025-07-06 19:21:35.686810 | orchestrator | 19:21:35.684 STDOUT terraform:  + created = (known after apply) 2025-07-06 19:21:35.686816 | orchestrator | 19:21:35.684 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-06 19:21:35.686821 | orchestrator | 19:21:35.684 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-06 19:21:35.686826 | orchestrator | 19:21:35.685 STDOUT terraform:  + force_delete = false 2025-07-06 19:21:35.686832 | orchestrator | 19:21:35.685 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-06 19:21:35.686837 | orchestrator | 19:21:35.685 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.686846 | orchestrator | 19:21:35.685 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:35.686851 | orchestrator | 19:21:35.685 STDOUT terraform:  + image_name = (known after apply) 2025-07-06 19:21:35.686857 | orchestrator | 19:21:35.685 STDOUT terraform:  + key_pair = "testbed" 2025-07-06 19:21:35.686862 | orchestrator | 19:21:35.685 STDOUT terraform:  + name = "testbed-node-0" 2025-07-06 19:21:35.686867 | orchestrator | 19:21:35.685 STDOUT terraform:  + power_state = "active" 2025-07-06 19:21:35.686873 | orchestrator | 19:21:35.685 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.686878 | orchestrator | 19:21:35.685 STDOUT terraform:  + security_groups = (known after apply) 2025-07-06 19:21:35.686883 | orchestrator | 19:21:35.685 STDOUT terraform:  + stop_before_destroy = false 2025-07-06 19:21:35.686889 | orchestrator | 19:21:35.685 STDOUT terraform:  + updated = (known after apply) 2025-07-06 19:21:35.686894 | orchestrator | 19:21:35.685 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-06 19:21:35.686899 | orchestrator | 19:21:35.685 STDOUT terraform:  + block_device { 2025-07-06 19:21:35.686911 | orchestrator | 19:21:35.685 STDOUT terraform:  + boot_index = 0 2025-07-06 19:21:35.686916 | orchestrator | 19:21:35.685 STDOUT terraform:  + delete_on_termination = false 2025-07-06 19:21:35.686921 | orchestrator | 19:21:35.685 STDOUT terraform:  + destination_type = "volume" 2025-07-06 19:21:35.686927 | orchestrator | 19:21:35.685 STDOUT terraform:  + multiattach = false 2025-07-06 19:21:35.686932 | orchestrator | 19:21:35.685 STDOUT terraform:  + source_type = "volume" 2025-07-06 19:21:35.686937 | orchestrator | 19:21:35.685 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:35.686943 | orchestrator | 19:21:35.685 STDOUT terraform:  } 2025-07-06 19:21:35.686948 | orchestrator | 19:21:35.685 STDOUT terraform:  + network { 2025-07-06 19:21:35.686971 | orchestrator | 19:21:35.685 STDOUT terraform:  + access_network = false 2025-07-06 19:21:35.686977 | orchestrator | 19:21:35.685 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-06 19:21:35.686982 | orchestrator | 19:21:35.685 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-06 19:21:35.686996 | orchestrator | 19:21:35.685 STDOUT terraform:  + mac = (known after apply) 2025-07-06 19:21:35.687002 | orchestrator | 19:21:35.685 STDOUT terraform:  + name = (known after apply) 2025-07-06 19:21:35.687007 | orchestrator | 19:21:35.685 STDOUT terraform:  + port = (known after apply) 2025-07-06 19:21:35.687012 | orchestrator | 19:21:35.685 STDOUT terraform:  + uuid = (known after apply) 
2025-07-06 19:21:35.687018 | orchestrator | 19:21:35.685 STDOUT terraform:  } 2025-07-06 19:21:35.687023 | orchestrator | 19:21:35.685 STDOUT terraform:  } 2025-07-06 19:21:35.687028 | orchestrator | 19:21:35.685 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-07-06 19:21:35.687034 | orchestrator | 19:21:35.685 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-06 19:21:35.687039 | orchestrator | 19:21:35.685 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-06 19:21:35.687045 | orchestrator | 19:21:35.685 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-06 19:21:35.687050 | orchestrator | 19:21:35.685 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-06 19:21:35.687055 | orchestrator | 19:21:35.685 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.687061 | orchestrator | 19:21:35.685 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.687066 | orchestrator | 19:21:35.686 STDOUT terraform:  + config_drive = true 2025-07-06 19:21:35.687071 | orchestrator | 19:21:35.686 STDOUT terraform:  + created = (known after apply) 2025-07-06 19:21:35.687076 | orchestrator | 19:21:35.686 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-06 19:21:35.687085 | orchestrator | 19:21:35.686 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-06 19:21:35.687091 | orchestrator | 19:21:35.686 STDOUT terraform:  + force_delete = false 2025-07-06 19:21:35.687100 | orchestrator | 19:21:35.686 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-06 19:21:35.687110 | orchestrator | 19:21:35.686 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.687115 | orchestrator | 19:21:35.686 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:35.687121 | orchestrator | 19:21:35.686 STDOUT terraform:  + image_name = (known after apply) 2025-07-06 19:21:35.687126 | orchestrator | 19:21:35.686 STDOUT terraform:  + key_pair = "testbed" 2025-07-06 19:21:35.687131 | orchestrator | 19:21:35.686 STDOUT terraform:  + name = "testbed-node-1" 2025-07-06 19:21:35.687137 | orchestrator | 19:21:35.686 STDOUT terraform:  + power_state = "active" 2025-07-06 19:21:35.687142 | orchestrator | 19:21:35.686 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.687147 | orchestrator | 19:21:35.686 STDOUT terraform:  + security_groups = (known after apply) 2025-07-06 19:21:35.687153 | orchestrator | 19:21:35.686 STDOUT terraform:  + stop_before_destroy = false 2025-07-06 19:21:35.687158 | orchestrator | 19:21:35.686 STDOUT terraform:  + updated = (known after apply) 2025-07-06 19:21:35.687164 | orchestrator | 19:21:35.686 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-06 19:21:35.687169 | orchestrator | 19:21:35.686 STDOUT terraform:  + block_device { 2025-07-06 19:21:35.687174 | orchestrator | 19:21:35.686 STDOUT terraform:  + boot_index = 0 2025-07-06 19:21:35.687183 | orchestrator | 19:21:35.686 STDOUT terraform:  + delete_on_termination = false 2025-07-06 19:21:35.687189 | orchestrator | 19:21:35.686 STDOUT terraform:  + destination_type = "volume" 2025-07-06 19:21:35.687194 | orchestrator | 19:21:35.686 STDOUT terraform:  + multiattach = false 2025-07-06 19:21:35.687199 | orchestrator | 19:21:35.686 STDOUT terraform:  + source_type = "volume" 2025-07-06 19:21:35.687205 | orchestrator | 19:21:35.686 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:35.687210 | 
orchestrator | 19:21:35.686 STDOUT terraform:  } 2025-07-06 19:21:35.687215 | orchestrator | 19:21:35.686 STDOUT terraform:  + network { 2025-07-06 19:21:35.687221 | orchestrator | 19:21:35.686 STDOUT terraform:  + access_network = false 2025-07-06 19:21:35.687226 | orchestrator | 19:21:35.686 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-06 19:21:35.687232 | orchestrator | 19:21:35.686 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-06 19:21:35.687237 | orchestrator | 19:21:35.686 STDOUT terraform:  + mac = (known after apply) 2025-07-06 19:21:35.687242 | orchestrator | 19:21:35.686 STDOUT terraform:  + name = (known after apply) 2025-07-06 19:21:35.687247 | orchestrator | 19:21:35.686 STDOUT terraform:  + port = (known after apply) 2025-07-06 19:21:35.687253 | orchestrator | 19:21:35.686 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:35.687258 | orchestrator | 19:21:35.686 STDOUT terraform:  } 2025-07-06 19:21:35.687264 | orchestrator | 19:21:35.686 STDOUT terraform:  } 2025-07-06 19:21:35.687269 | orchestrator | 19:21:35.686 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-07-06 19:21:35.687274 | orchestrator | 19:21:35.686 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-06 19:21:35.687284 | orchestrator | 19:21:35.687 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-06 19:21:35.687292 | orchestrator | 19:21:35.687 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-06 19:21:35.687297 | orchestrator | 19:21:35.687 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-06 19:21:35.687303 | orchestrator | 19:21:35.687 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.687308 | orchestrator | 19:21:35.687 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.687313 | orchestrator | 19:21:35.687 STDOUT terraform:  + config_drive = true 2025-07-06 19:21:35.687318 | orchestrator | 19:21:35.687 STDOUT terraform:  + created = (known after apply) 2025-07-06 19:21:35.687324 | orchestrator | 19:21:35.687 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-06 19:21:35.687329 | orchestrator | 19:21:35.687 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-06 19:21:35.687334 | orchestrator | 19:21:35.687 STDOUT terraform:  + force_delete = false 2025-07-06 19:21:35.687342 | orchestrator | 19:21:35.687 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-06 19:21:35.689629 | orchestrator | 19:21:35.687 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.689677 | orchestrator | 19:21:35.687 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:35.689684 | orchestrator | 19:21:35.687 STDOUT terraform:  + image_name = (known after apply) 2025-07-06 19:21:35.689689 | orchestrator | 19:21:35.687 STDOUT terraform:  + key_pair = "testbed" 2025-07-06 19:21:35.689694 | orchestrator | 19:21:35.687 STDOUT terraform:  + name = "testbed-node-2" 2025-07-06 19:21:35.689699 | orchestrator | 19:21:35.687 STDOUT terraform:  + power_state = "active" 2025-07-06 19:21:35.689704 | orchestrator | 19:21:35.687 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.689709 | orchestrator | 19:21:35.687 STDOUT terraform:  + security_groups = (known after apply) 2025-07-06 19:21:35.689714 | orchestrator | 19:21:35.687 STDOUT terraform:  + stop_before_destroy = false 2025-07-06 19:21:35.689719 | orchestrator | 19:21:35.687 STDOUT terraform:  + updated = (known 
after apply) 2025-07-06 19:21:35.689724 | orchestrator | 19:21:35.687 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-06 19:21:35.689730 | orchestrator | 19:21:35.687 STDOUT terraform:  + block_device { 2025-07-06 19:21:35.689735 | orchestrator | 19:21:35.687 STDOUT terraform:  + boot_index = 0 2025-07-06 19:21:35.689740 | orchestrator | 19:21:35.687 STDOUT terraform:  + delete_on_termination = false 2025-07-06 19:21:35.689757 | orchestrator | 19:21:35.687 STDOUT terraform:  + destination_type = "volume" 2025-07-06 19:21:35.689763 | orchestrator | 19:21:35.687 STDOUT terraform:  + multiattach = false 2025-07-06 19:21:35.689768 | orchestrator | 19:21:35.687 STDOUT terraform:  + source_type = "volume" 2025-07-06 19:21:35.689772 | orchestrator | 19:21:35.687 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:35.689787 | orchestrator | 19:21:35.687 STDOUT terraform:  } 2025-07-06 19:21:35.689793 | orchestrator | 19:21:35.687 STDOUT terraform:  + network { 2025-07-06 19:21:35.689797 | orchestrator | 19:21:35.687 STDOUT terraform:  + access_network = false 2025-07-06 19:21:35.689802 | orchestrator | 19:21:35.687 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-06 19:21:35.689807 | orchestrator | 19:21:35.687 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-06 19:21:35.689812 | orchestrator | 19:21:35.687 STDOUT terraform:  + mac = (known after apply) 2025-07-06 19:21:35.689817 | orchestrator | 19:21:35.688 STDOUT terraform:  + name = (known after apply) 2025-07-06 19:21:35.689821 | orchestrator | 19:21:35.688 STDOUT terraform:  + port = (known after apply) 2025-07-06 19:21:35.689826 | orchestrator | 19:21:35.688 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:35.689831 | orchestrator | 19:21:35.688 STDOUT terraform:  } 2025-07-06 19:21:35.689838 | orchestrator | 19:21:35.688 STDOUT terraform:  } 2025-07-06 19:21:35.689843 | orchestrator | 19:21:35.688 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-07-06 19:21:35.689849 | orchestrator | 19:21:35.688 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-06 19:21:35.689853 | orchestrator | 19:21:35.688 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-06 19:21:35.689858 | orchestrator | 19:21:35.688 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-06 19:21:35.689863 | orchestrator | 19:21:35.688 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-06 19:21:35.689868 | orchestrator | 19:21:35.688 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.689872 | orchestrator | 19:21:35.688 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.689877 | orchestrator | 19:21:35.688 STDOUT terraform:  + config_drive = true 2025-07-06 19:21:35.689882 | orchestrator | 19:21:35.688 STDOUT terraform:  + created = (known after apply) 2025-07-06 19:21:35.689894 | orchestrator | 19:21:35.688 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-06 19:21:35.689899 | orchestrator | 19:21:35.688 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-06 19:21:35.689904 | orchestrator | 19:21:35.688 STDOUT terraform:  + force_delete = false 2025-07-06 19:21:35.689909 | orchestrator | 19:21:35.688 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-06 19:21:35.689913 | orchestrator | 19:21:35.688 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.689918 | orchestrator | 19:21:35.688 STDOUT 
terraform:  + image_id = (known after apply) 2025-07-06 19:21:35.689923 | orchestrator | 19:21:35.688 STDOUT terraform:  + image_name = (known after apply) 2025-07-06 19:21:35.689927 | orchestrator | 19:21:35.688 STDOUT terraform:  + key_pair = "testbed" 2025-07-06 19:21:35.689932 | orchestrator | 19:21:35.688 STDOUT terraform:  + name = "testbed-node-3" 2025-07-06 19:21:35.689937 | orchestrator | 19:21:35.688 STDOUT terraform:  + power_state = "active" 2025-07-06 19:21:35.689946 | orchestrator | 19:21:35.688 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.689951 | orchestrator | 19:21:35.688 STDOUT terraform:  + security_groups = (known after apply) 2025-07-06 19:21:35.689956 | orchestrator | 19:21:35.688 STDOUT terraform:  + stop_before_destroy = false 2025-07-06 19:21:35.689961 | orchestrator | 19:21:35.688 STDOUT terraform:  + updated = (known after apply) 2025-07-06 19:21:35.689965 | orchestrator | 19:21:35.688 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-06 19:21:35.689970 | orchestrator | 19:21:35.688 STDOUT terraform:  + block_device { 2025-07-06 19:21:35.689975 | orchestrator | 19:21:35.688 STDOUT terraform:  + boot_index = 0 2025-07-06 19:21:35.689980 | orchestrator | 19:21:35.688 STDOUT terraform:  + delete_on_termination = false 2025-07-06 19:21:35.689984 | orchestrator | 19:21:35.688 STDOUT terraform:  + destination_type = "volume" 2025-07-06 19:21:35.689989 | orchestrator | 19:21:35.688 STDOUT terraform:  + multiattach = false 2025-07-06 19:21:35.689994 | orchestrator | 19:21:35.688 STDOUT terraform:  + source_type = "volume" 2025-07-06 19:21:35.689999 | orchestrator | 19:21:35.688 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:35.690004 | orchestrator | 19:21:35.688 STDOUT terraform:  } 2025-07-06 19:21:35.690008 | orchestrator | 19:21:35.688 STDOUT terraform:  + network { 2025-07-06 19:21:35.690013 | orchestrator | 19:21:35.688 STDOUT terraform:  + access_network = false 2025-07-06 19:21:35.690034 | orchestrator | 19:21:35.689 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-06 19:21:35.690038 | orchestrator | 19:21:35.689 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-06 19:21:35.690043 | orchestrator | 19:21:35.689 STDOUT terraform:  + mac = (known after apply) 2025-07-06 19:21:35.690048 | orchestrator | 19:21:35.689 STDOUT terraform:  + name = (known after apply) 2025-07-06 19:21:35.690053 | orchestrator | 19:21:35.689 STDOUT terraform:  + port = (known after apply) 2025-07-06 19:21:35.690057 | orchestrator | 19:21:35.689 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:35.690062 | orchestrator | 19:21:35.689 STDOUT terraform:  } 2025-07-06 19:21:35.690067 | orchestrator | 19:21:35.689 STDOUT terraform:  } 2025-07-06 19:21:35.690072 | orchestrator | 19:21:35.689 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-07-06 19:21:35.690077 | orchestrator | 19:21:35.689 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-06 19:21:35.690081 | orchestrator | 19:21:35.689 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-06 19:21:35.690086 | orchestrator | 19:21:35.689 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-06 19:21:35.690091 | orchestrator | 19:21:35.689 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-06 19:21:35.690099 | orchestrator | 19:21:35.689 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.690104 | 
orchestrator | 19:21:35.689 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.690113 | orchestrator | 19:21:35.689 STDOUT terraform:  + config_drive = true 2025-07-06 19:21:35.690122 | orchestrator | 19:21:35.689 STDOUT terraform:  + created = (known after apply) 2025-07-06 19:21:35.690127 | orchestrator | 19:21:35.689 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-06 19:21:35.690132 | orchestrator | 19:21:35.689 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-06 19:21:35.690137 | orchestrator | 19:21:35.689 STDOUT terraform:  + force_delete = false 2025-07-06 19:21:35.690141 | orchestrator | 19:21:35.689 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-06 19:21:35.690146 | orchestrator | 19:21:35.689 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.690151 | orchestrator | 19:21:35.689 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:35.690159 | orchestrator | 19:21:35.689 STDOUT terraform:  + image_name = (known after apply) 2025-07-06 19:21:35.690164 | orchestrator | 19:21:35.689 STDOUT terraform:  + key_pair = "testbed" 2025-07-06 19:21:35.690169 | orchestrator | 19:21:35.689 STDOUT terraform:  + name = "testbed-node-4" 2025-07-06 19:21:35.690174 | orchestrator | 19:21:35.689 STDOUT terraform:  + power_state = "active" 2025-07-06 19:21:35.690179 | orchestrator | 19:21:35.689 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.690184 | orchestrator | 19:21:35.689 STDOUT terraform:  + security_groups = (known after apply) 2025-07-06 19:21:35.690189 | orchestrator | 19:21:35.689 STDOUT terraform:  + stop_before_destroy = false 2025-07-06 19:21:35.690194 | orchestrator | 19:21:35.689 STDOUT terraform:  + updated = (known after apply) 2025-07-06 19:21:35.690198 | orchestrator | 19:21:35.689 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-06 19:21:35.690203 | orchestrator | 19:21:35.689 STDOUT terraform:  + block_device { 2025-07-06 19:21:35.690208 | orchestrator | 19:21:35.689 STDOUT terraform:  + boot_index = 0 2025-07-06 19:21:35.690213 | orchestrator | 19:21:35.689 STDOUT terraform:  + delete_on_termination = false 2025-07-06 19:21:35.690218 | orchestrator | 19:21:35.689 STDOUT terraform:  + destination_type = "volume" 2025-07-06 19:21:35.690227 | orchestrator | 19:21:35.689 STDOUT terraform:  + multiattach = false 2025-07-06 19:21:35.690232 | orchestrator | 19:21:35.690 STDOUT terraform:  + source_type = "volume" 2025-07-06 19:21:35.690237 | orchestrator | 19:21:35.690 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:35.690242 | orchestrator | 19:21:35.690 STDOUT terraform:  } 2025-07-06 19:21:35.690249 | orchestrator | 19:21:35.690 STDOUT terraform:  + network { 2025-07-06 19:21:35.690254 | orchestrator | 19:21:35.690 STDOUT terraform:  + access_network = false 2025-07-06 19:21:35.690258 | orchestrator | 19:21:35.690 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-06 19:21:35.690263 | orchestrator | 19:21:35.690 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-06 19:21:35.690268 | orchestrator | 19:21:35.690 STDOUT terraform:  + mac = (known after apply) 2025-07-06 19:21:35.690276 | orchestrator | 19:21:35.690 STDOUT terraform:  + name = (known after apply) 2025-07-06 19:21:35.690283 | orchestrator | 19:21:35.690 STDOUT terraform:  + port = (known after apply) 2025-07-06 19:21:35.690288 | orchestrator | 19:21:35.690 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:35.690295 | 
orchestrator | 19:21:35.690 STDOUT terraform:  } 2025-07-06 19:21:35.690301 | orchestrator | 19:21:35.690 STDOUT terraform:  } 2025-07-06 19:21:35.690350 | orchestrator | 19:21:35.690 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-07-06 19:21:35.690575 | orchestrator | 19:21:35.690 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-06 19:21:35.690589 | orchestrator | 19:21:35.690 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-06 19:21:35.690594 | orchestrator | 19:21:35.690 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-06 19:21:35.690599 | orchestrator | 19:21:35.690 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-06 19:21:35.690603 | orchestrator | 19:21:35.690 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.690608 | orchestrator | 19:21:35.690 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:35.690624 | orchestrator | 19:21:35.690 STDOUT terraform:  + config_drive = true 2025-07-06 19:21:35.690632 | orchestrator | 19:21:35.690 STDOUT terraform:  + created = (known after apply) 2025-07-06 19:21:35.690637 | orchestrator | 19:21:35.690 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-06 19:21:35.690644 | orchestrator | 19:21:35.690 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-06 19:21:35.693071 | orchestrator | 19:21:35.690 STDOUT terraform:  + force_delete = false 2025-07-06 19:21:35.693113 | orchestrator | 19:21:35.690 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-06 19:21:35.693118 | orchestrator | 19:21:35.690 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.693123 | orchestrator | 19:21:35.690 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:35.693128 | orchestrator | 19:21:35.690 STDOUT terraform:  + image_name = (known after apply) 2025-07-06 19:21:35.693132 | orchestrator | 19:21:35.690 STDOUT terraform:  + key_pair = "testbed" 2025-07-06 19:21:35.693137 | orchestrator | 19:21:35.690 STDOUT terraform:  + name = "testbed-node-5" 2025-07-06 19:21:35.693141 | orchestrator | 19:21:35.690 STDOUT terraform:  + power_state = "active" 2025-07-06 19:21:35.693146 | orchestrator | 19:21:35.690 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.693151 | orchestrator | 19:21:35.690 STDOUT terraform:  + security_groups = (known after apply) 2025-07-06 19:21:35.693155 | orchestrator | 19:21:35.690 STDOUT terraform:  + stop_before_destroy = false 2025-07-06 19:21:35.693160 | orchestrator | 19:21:35.690 STDOUT terraform:  + updated = (known after apply) 2025-07-06 19:21:35.693165 | orchestrator | 19:21:35.690 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-06 19:21:35.693170 | orchestrator | 19:21:35.691 STDOUT terraform:  + block_device { 2025-07-06 19:21:35.693186 | orchestrator | 19:21:35.691 STDOUT terraform:  + boot_index = 0 2025-07-06 19:21:35.693191 | orchestrator | 19:21:35.691 STDOUT terraform:  + delete_on_termination = false 2025-07-06 19:21:35.693196 | orchestrator | 19:21:35.691 STDOUT terraform:  + destination_type = "volume" 2025-07-06 19:21:35.693200 | orchestrator | 19:21:35.691 STDOUT terraform:  + multiattach = false 2025-07-06 19:21:35.693204 | orchestrator | 19:21:35.691 STDOUT terraform:  + source_type = "volume" 2025-07-06 19:21:35.693209 | orchestrator | 19:21:35.691 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:35.693214 | orchestrator | 19:21:35.691 
STDOUT terraform:  } 2025-07-06 19:21:35.693218 | orchestrator | 19:21:35.691 STDOUT terraform:  + network { 2025-07-06 19:21:35.693223 | orchestrator | 19:21:35.691 STDOUT terraform:  + access_network = false 2025-07-06 19:21:35.693227 | orchestrator | 19:21:35.691 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-06 19:21:35.693232 | orchestrator | 19:21:35.691 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-06 19:21:35.693236 | orchestrator | 19:21:35.691 STDOUT terraform:  + mac = (known after apply) 2025-07-06 19:21:35.693241 | orchestrator | 19:21:35.691 STDOUT terraform:  + name = (known after apply) 2025-07-06 19:21:35.693252 | orchestrator | 19:21:35.691 STDOUT terraform:  + port = (known after apply) 2025-07-06 19:21:35.693256 | orchestrator | 19:21:35.691 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:35.693261 | orchestrator | 19:21:35.691 STDOUT terraform:  } 2025-07-06 19:21:35.693266 | orchestrator | 19:21:35.691 STDOUT terraform:  } 2025-07-06 19:21:35.693270 | orchestrator | 19:21:35.691 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-07-06 19:21:35.693275 | orchestrator | 19:21:35.691 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-07-06 19:21:35.693279 | orchestrator | 19:21:35.691 STDOUT terraform:  + fingerprint = (known after apply) 2025-07-06 19:21:35.693284 | orchestrator | 19:21:35.691 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.693288 | orchestrator | 19:21:35.691 STDOUT terraform:  + name = "testbed" 2025-07-06 19:21:35.693293 | orchestrator | 19:21:35.691 STDOUT terraform:  + private_key = (sensitive value) 2025-07-06 19:21:35.693297 | orchestrator | 19:21:35.691 STDOUT terraform:  + public_key = (known after apply) 2025-07-06 19:21:35.693310 | orchestrator | 19:21:35.691 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.693318 | orchestrator | 19:21:35.691 STDOUT terraform:  + user_id = (known after apply) 2025-07-06 19:21:35.693322 | orchestrator | 19:21:35.691 STDOUT terraform:  } 2025-07-06 19:21:35.693327 | orchestrator | 19:21:35.691 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-07-06 19:21:35.693333 | orchestrator | 19:21:35.691 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-06 19:21:35.693340 | orchestrator | 19:21:35.691 STDOUT terraform:  + device = (known after apply) 2025-07-06 19:21:35.693347 | orchestrator | 19:21:35.691 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.693430 | orchestrator | 19:21:35.691 STDOUT terraform:  + instance_id = (known after apply) 2025-07-06 19:21:35.693436 | orchestrator | 19:21:35.691 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.693441 | orchestrator | 19:21:35.691 STDOUT terraform:  + volume_id = (known after apply) 2025-07-06 19:21:35.693446 | orchestrator | 19:21:35.691 STDOUT terraform:  } 2025-07-06 19:21:35.693472 | orchestrator | 19:21:35.691 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-07-06 19:21:35.693477 | orchestrator | 19:21:35.691 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-06 19:21:35.693482 | orchestrator | 19:21:35.691 STDOUT terraform:  + device = (known after apply) 2025-07-06 19:21:35.693487 | orchestrator | 19:21:35.691 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.693491 | 
orchestrator | 19:21:35.691 STDOUT terraform:  + instance_id = (known after apply) 2025-07-06 19:21:35.693496 | orchestrator | 19:21:35.692 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.693500 | orchestrator | 19:21:35.692 STDOUT terraform:  + volume_id = (known after apply) 2025-07-06 19:21:35.693505 | orchestrator | 19:21:35.692 STDOUT terraform:  } 2025-07-06 19:21:35.693509 | orchestrator | 19:21:35.693 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-07-06 19:21:35.693514 | orchestrator | 19:21:35.693 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-06 19:21:35.693518 | orchestrator | 19:21:35.693 STDOUT terraform:  + device = (known after apply) 2025-07-06 19:21:35.693523 | orchestrator | 19:21:35.693 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.693527 | orchestrator | 19:21:35.693 STDOUT terraform:  + instance_id = (known after apply) 2025-07-06 19:21:35.693532 | orchestrator | 19:21:35.693 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.693536 | orchestrator | 19:21:35.693 STDOUT terraform:  + volume_id = (known after apply) 2025-07-06 19:21:35.693541 | orchestrator | 19:21:35.693 STDOUT terraform:  } 2025-07-06 19:21:35.693549 | orchestrator | 19:21:35.693 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-07-06 19:21:35.693554 | orchestrator | 19:21:35.693 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-06 19:21:35.693558 | orchestrator | 19:21:35.693 STDOUT terraform:  + device = (known after apply) 2025-07-06 19:21:35.693563 | orchestrator | 19:21:35.693 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.693567 | orchestrator | 19:21:35.693 STDOUT terraform:  + instance_id = (known after apply) 2025-07-06 19:21:35.693572 | orchestrator | 19:21:35.693 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.693576 | orchestrator | 19:21:35.693 STDOUT terraform:  + volume_id = (known after apply) 2025-07-06 19:21:35.693581 | orchestrator | 19:21:35.693 STDOUT terraform:  } 2025-07-06 19:21:35.693587 | orchestrator | 19:21:35.693 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-07-06 19:21:35.693641 | orchestrator | 19:21:35.693 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-06 19:21:35.693668 | orchestrator | 19:21:35.693 STDOUT terraform:  + device = (known after apply) 2025-07-06 19:21:35.693697 | orchestrator | 19:21:35.693 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.693726 | orchestrator | 19:21:35.693 STDOUT terraform:  + instance_id = (known after apply) 2025-07-06 19:21:35.693754 | orchestrator | 19:21:35.693 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.693783 | orchestrator | 19:21:35.693 STDOUT terraform:  + volume_id = (known after apply) 2025-07-06 19:21:35.693791 | orchestrator | 19:21:35.693 STDOUT terraform:  } 2025-07-06 19:21:35.693842 | orchestrator | 19:21:35.693 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-07-06 19:21:35.693890 | orchestrator | 19:21:35.693 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-06 19:21:35.693918 | orchestrator | 19:21:35.693 STDOUT terraform:  + device = (known after 
apply) 2025-07-06 19:21:35.693947 | orchestrator | 19:21:35.693 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.693975 | orchestrator | 19:21:35.693 STDOUT terraform:  + instance_id = (known after apply) 2025-07-06 19:21:35.694003 | orchestrator | 19:21:35.693 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.694060 | orchestrator | 19:21:35.693 STDOUT terraform:  + volume_id = (known after apply) 2025-07-06 19:21:35.694070 | orchestrator | 19:21:35.694 STDOUT terraform:  } 2025-07-06 19:21:35.694121 | orchestrator | 19:21:35.694 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-07-06 19:21:35.694169 | orchestrator | 19:21:35.694 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-06 19:21:35.694205 | orchestrator | 19:21:35.694 STDOUT terraform:  + device = (known after apply) 2025-07-06 19:21:35.694228 | orchestrator | 19:21:35.694 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.694257 | orchestrator | 19:21:35.694 STDOUT terraform:  + instance_id = (known after apply) 2025-07-06 19:21:35.694287 | orchestrator | 19:21:35.694 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.694313 | orchestrator | 19:21:35.694 STDOUT terraform:  + volume_id = (known after apply) 2025-07-06 19:21:35.694321 | orchestrator | 19:21:35.694 STDOUT terraform:  } 2025-07-06 19:21:35.694373 | orchestrator | 19:21:35.694 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-07-06 19:21:35.694420 | orchestrator | 19:21:35.694 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-06 19:21:35.694448 | orchestrator | 19:21:35.694 STDOUT terraform:  + device = (known after apply) 2025-07-06 19:21:35.694505 | orchestrator | 19:21:35.694 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.694534 | orchestrator | 19:21:35.694 STDOUT terraform:  + instance_id = (known after apply) 2025-07-06 19:21:35.694563 | orchestrator | 19:21:35.694 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.694594 | orchestrator | 19:21:35.694 STDOUT terraform:  + volume_id = (known after apply) 2025-07-06 19:21:35.694602 | orchestrator | 19:21:35.694 STDOUT terraform:  } 2025-07-06 19:21:35.694653 | orchestrator | 19:21:35.694 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-07-06 19:21:35.694702 | orchestrator | 19:21:35.694 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-06 19:21:35.694734 | orchestrator | 19:21:35.694 STDOUT terraform:  + device = (known after apply) 2025-07-06 19:21:35.694759 | orchestrator | 19:21:35.694 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.694788 | orchestrator | 19:21:35.694 STDOUT terraform:  + instance_id = (known after apply) 2025-07-06 19:21:35.694817 | orchestrator | 19:21:35.694 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.694846 | orchestrator | 19:21:35.694 STDOUT terraform:  + volume_id = (known after apply) 2025-07-06 19:21:35.694853 | orchestrator | 19:21:35.694 STDOUT terraform:  } 2025-07-06 19:21:35.694916 | orchestrator | 19:21:35.694 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-07-06 19:21:35.694970 | orchestrator | 19:21:35.694 STDOUT terraform:  + resource 
"openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-07-06 19:21:35.694999 | orchestrator | 19:21:35.694 STDOUT terraform:  + fixed_ip = (known after apply) 2025-07-06 19:21:35.695026 | orchestrator | 19:21:35.694 STDOUT terraform:  + floating_ip = (known after apply) 2025-07-06 19:21:35.695055 | orchestrator | 19:21:35.695 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.695084 | orchestrator | 19:21:35.695 STDOUT terraform:  + port_id = (known after apply) 2025-07-06 19:21:35.695111 | orchestrator | 19:21:35.695 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.695127 | orchestrator | 19:21:35.695 STDOUT terraform:  } 2025-07-06 19:21:35.695175 | orchestrator | 19:21:35.695 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-07-06 19:21:35.695221 | orchestrator | 19:21:35.695 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-07-06 19:21:35.695246 | orchestrator | 19:21:35.695 STDOUT terraform:  + address = (known after apply) 2025-07-06 19:21:35.695272 | orchestrator | 19:21:35.695 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.695299 | orchestrator | 19:21:35.695 STDOUT terraform:  + dns_domain = (known after apply) 2025-07-06 19:21:35.695324 | orchestrator | 19:21:35.695 STDOUT terraform:  + dns_name = (known after apply) 2025-07-06 19:21:35.695347 | orchestrator | 19:21:35.695 STDOUT terraform:  + fixed_ip = (known after apply) 2025-07-06 19:21:35.695372 | orchestrator | 19:21:35.695 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.695392 | orchestrator | 19:21:35.695 STDOUT terraform:  + pool = "public" 2025-07-06 19:21:35.695419 | orchestrator | 19:21:35.695 STDOUT terraform:  + port_id = (known after apply) 2025-07-06 19:21:35.695445 | orchestrator | 19:21:35.695 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.695507 | orchestrator | 19:21:35.695 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-06 19:21:35.695519 | orchestrator | 19:21:35.695 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.695525 | orchestrator | 19:21:35.695 STDOUT terraform:  } 2025-07-06 19:21:35.695564 | orchestrator | 19:21:35.695 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-07-06 19:21:35.695601 | orchestrator | 19:21:35.695 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-07-06 19:21:35.695637 | orchestrator | 19:21:35.695 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-06 19:21:35.695674 | orchestrator | 19:21:35.695 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.695696 | orchestrator | 19:21:35.695 STDOUT terraform:  + availability_zone_hints = [ 2025-07-06 19:21:35.695711 | orchestrator | 19:21:35.695 STDOUT terraform:  + "nova", 2025-07-06 19:21:35.695718 | orchestrator | 19:21:35.695 STDOUT terraform:  ] 2025-07-06 19:21:35.695757 | orchestrator | 19:21:35.695 STDOUT terraform:  + dns_domain = (known after apply) 2025-07-06 19:21:35.695792 | orchestrator | 19:21:35.695 STDOUT terraform:  + external = (known after apply) 2025-07-06 19:21:35.695830 | orchestrator | 19:21:35.695 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.695867 | orchestrator | 19:21:35.695 STDOUT terraform:  + mtu = (known after apply) 2025-07-06 19:21:35.695906 | orchestrator | 19:21:35.695 STDOUT terraform:  + name = 
"net-testbed-management" 2025-07-06 19:21:35.695942 | orchestrator | 19:21:35.695 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-06 19:21:35.695978 | orchestrator | 19:21:35.695 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-06 19:21:35.696016 | orchestrator | 19:21:35.695 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.696055 | orchestrator | 19:21:35.696 STDOUT terraform:  + shared = (known after apply) 2025-07-06 19:21:35.696089 | orchestrator | 19:21:35.696 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.696128 | orchestrator | 19:21:35.696 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-07-06 19:21:35.696153 | orchestrator | 19:21:35.696 STDOUT terraform:  + segments (known after apply) 2025-07-06 19:21:35.696167 | orchestrator | 19:21:35.696 STDOUT terraform:  } 2025-07-06 19:21:35.696207 | orchestrator | 19:21:35.696 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-07-06 19:21:35.696260 | orchestrator | 19:21:35.696 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-07-06 19:21:35.696301 | orchestrator | 19:21:35.696 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-06 19:21:35.696337 | orchestrator | 19:21:35.696 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-06 19:21:35.696371 | orchestrator | 19:21:35.696 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-06 19:21:35.697710 | orchestrator | 19:21:35.696 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.697740 | orchestrator | 19:21:35.696 STDOUT terraform:  + device_id = (known after apply) 2025-07-06 19:21:35.697754 | orchestrator | 19:21:35.696 STDOUT terraform:  + device_owner = (known after apply) 2025-07-06 19:21:35.697758 | orchestrator | 19:21:35.696 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-06 19:21:35.697762 | orchestrator | 19:21:35.696 STDOUT terraform:  + dns_name = (known after apply) 2025-07-06 19:21:35.697767 | orchestrator | 19:21:35.696 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.697771 | orchestrator | 19:21:35.696 STDOUT terraform:  + mac_address = (known after apply) 2025-07-06 19:21:35.697775 | orchestrator | 19:21:35.696 STDOUT terraform:  + network_id = (known after apply) 2025-07-06 19:21:35.697779 | orchestrator | 19:21:35.696 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-06 19:21:35.697783 | orchestrator | 19:21:35.696 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-06 19:21:35.697787 | orchestrator | 19:21:35.696 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.697790 | orchestrator | 19:21:35.696 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-06 19:21:35.697794 | orchestrator | 19:21:35.696 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.697798 | orchestrator | 19:21:35.696 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.697802 | orchestrator | 19:21:35.696 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-06 19:21:35.697806 | orchestrator | 19:21:35.696 STDOUT terraform:  } 2025-07-06 19:21:35.697810 | orchestrator | 19:21:35.696 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.697813 | orchestrator | 19:21:35.696 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-06 19:21:35.697817 | orchestrator | 19:21:35.696 STDOUT 
terraform:  } 2025-07-06 19:21:35.697821 | orchestrator | 19:21:35.696 STDOUT terraform:  + binding (known after apply) 2025-07-06 19:21:35.697825 | orchestrator | 19:21:35.696 STDOUT terraform:  + fixed_ip { 2025-07-06 19:21:35.697828 | orchestrator | 19:21:35.696 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-07-06 19:21:35.697832 | orchestrator | 19:21:35.696 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-06 19:21:35.697836 | orchestrator | 19:21:35.696 STDOUT terraform:  } 2025-07-06 19:21:35.697839 | orchestrator | 19:21:35.696 STDOUT terraform:  } 2025-07-06 19:21:35.697843 | orchestrator | 19:21:35.696 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-07-06 19:21:35.697848 | orchestrator | 19:21:35.697 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-06 19:21:35.697852 | orchestrator | 19:21:35.697 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-06 19:21:35.697855 | orchestrator | 19:21:35.697 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-06 19:21:35.697859 | orchestrator | 19:21:35.697 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-06 19:21:35.697863 | orchestrator | 19:21:35.697 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.697867 | orchestrator | 19:21:35.697 STDOUT terraform:  + device_id = (known after apply) 2025-07-06 19:21:35.697874 | orchestrator | 19:21:35.697 STDOUT terraform:  + device_owner = (known after apply) 2025-07-06 19:21:35.697878 | orchestrator | 19:21:35.697 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-06 19:21:35.697882 | orchestrator | 19:21:35.697 STDOUT terraform:  + dns_name = (known after apply) 2025-07-06 19:21:35.697885 | orchestrator | 19:21:35.697 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.697895 | orchestrator | 19:21:35.697 STDOUT terraform:  + mac_address = (known after apply) 2025-07-06 19:21:35.697905 | orchestrator | 19:21:35.697 STDOUT terraform:  + network_id = (known after apply) 2025-07-06 19:21:35.697909 | orchestrator | 19:21:35.697 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-06 19:21:35.697912 | orchestrator | 19:21:35.697 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-06 19:21:35.697916 | orchestrator | 19:21:35.697 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.697920 | orchestrator | 19:21:35.697 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-06 19:21:35.697923 | orchestrator | 19:21:35.697 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.697927 | orchestrator | 19:21:35.697 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.697931 | orchestrator | 19:21:35.697 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-06 19:21:35.697935 | orchestrator | 19:21:35.697 STDOUT terraform:  } 2025-07-06 19:21:35.697938 | orchestrator | 19:21:35.697 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.697942 | orchestrator | 19:21:35.697 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-06 19:21:35.697946 | orchestrator | 19:21:35.697 STDOUT terraform:  } 2025-07-06 19:21:35.697949 | orchestrator | 19:21:35.697 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.697953 | orchestrator | 19:21:35.697 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-06 19:21:35.697957 | orchestrator | 19:21:35.697 STDOUT terraform:  } 
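The port entries in this part of the plan follow one pattern: manager_port_management (above) gets the fixed management address 192.168.16.5 plus allowed_address_pairs for 192.168.112.0/20 and 192.168.16.8/20, while node_port_management[0..5] (continuing below) get 192.168.16.10 through 192.168.16.15 plus four allowed_address_pairs (192.168.112.0/20, 192.168.16.254/20, 192.168.16.8/20, 192.168.16.9/20). A minimal sketch of the node-port side follows; the network, subnet and security-group references and the use of a dynamic block are assumptions, since only the resulting plan is visible in this log.

resource "openstack_networking_port_v2" "node_port_management" {
  # Sketch reconstructed from the plan; the referenced subnet and security
  # group resource names are assumptions.
  count      = var.number_of_nodes
  network_id = openstack_networking_network_v2.net_management.id

  security_group_ids = [openstack_networking_secgroup_v2.security_group_management.id]

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${count.index + 10}"    # .10 .. .15 in the plan
  }

  dynamic "allowed_address_pairs" {
    for_each = ["192.168.112.0/20", "192.168.16.254/20", "192.168.16.8/20", "192.168.16.9/20"]
    content {
      ip_address = allowed_address_pairs.value
    }
  }
}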
2025-07-06 19:21:35.697961 | orchestrator | 19:21:35.697 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.697965 | orchestrator | 19:21:35.697 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-06 19:21:35.697968 | orchestrator | 19:21:35.697 STDOUT terraform:  } 2025-07-06 19:21:35.697972 | orchestrator | 19:21:35.697 STDOUT terraform:  + binding (known after apply) 2025-07-06 19:21:35.697976 | orchestrator | 19:21:35.697 STDOUT terraform:  + fixed_ip { 2025-07-06 19:21:35.697980 | orchestrator | 19:21:35.697 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-07-06 19:21:35.697983 | orchestrator | 19:21:35.697 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-06 19:21:35.697987 | orchestrator | 19:21:35.697 STDOUT terraform:  } 2025-07-06 19:21:35.697991 | orchestrator | 19:21:35.697 STDOUT terraform:  } 2025-07-06 19:21:35.697996 | orchestrator | 19:21:35.697 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-07-06 19:21:35.698000 | orchestrator | 19:21:35.697 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-06 19:21:35.698010 | orchestrator | 19:21:35.697 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-06 19:21:35.698068 | orchestrator | 19:21:35.698 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-06 19:21:35.698103 | orchestrator | 19:21:35.698 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-06 19:21:35.698146 | orchestrator | 19:21:35.698 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.698178 | orchestrator | 19:21:35.698 STDOUT terraform:  + device_id = (known after apply) 2025-07-06 19:21:35.698230 | orchestrator | 19:21:35.698 STDOUT terraform:  + device_owner = (known after apply) 2025-07-06 19:21:35.698268 | orchestrator | 19:21:35.698 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-06 19:21:35.698306 | orchestrator | 19:21:35.698 STDOUT terraform:  + dns_name = (known after apply) 2025-07-06 19:21:35.698345 | orchestrator | 19:21:35.698 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.698382 | orchestrator | 19:21:35.698 STDOUT terraform:  + mac_address = (known after apply) 2025-07-06 19:21:35.698420 | orchestrator | 19:21:35.698 STDOUT terraform:  + network_id = (known after apply) 2025-07-06 19:21:35.698478 | orchestrator | 19:21:35.698 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-06 19:21:35.698548 | orchestrator | 19:21:35.698 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-06 19:21:35.698568 | orchestrator | 19:21:35.698 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.698605 | orchestrator | 19:21:35.698 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-06 19:21:35.698674 | orchestrator | 19:21:35.698 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.698681 | orchestrator | 19:21:35.698 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.698686 | orchestrator | 19:21:35.698 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-06 19:21:35.698700 | orchestrator | 19:21:35.698 STDOUT terraform:  } 2025-07-06 19:21:35.698721 | orchestrator | 19:21:35.698 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.698750 | orchestrator | 19:21:35.698 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-06 19:21:35.698756 | orchestrator | 19:21:35.698 STDOUT terraform:  } 2025-07-06 
19:21:35.698782 | orchestrator | 19:21:35.698 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.698809 | orchestrator | 19:21:35.698 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-06 19:21:35.698816 | orchestrator | 19:21:35.698 STDOUT terraform:  } 2025-07-06 19:21:35.698837 | orchestrator | 19:21:35.698 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.698865 | orchestrator | 19:21:35.698 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-06 19:21:35.698871 | orchestrator | 19:21:35.698 STDOUT terraform:  } 2025-07-06 19:21:35.698897 | orchestrator | 19:21:35.698 STDOUT terraform:  + binding (known after apply) 2025-07-06 19:21:35.698904 | orchestrator | 19:21:35.698 STDOUT terraform:  + fixed_ip { 2025-07-06 19:21:35.698931 | orchestrator | 19:21:35.698 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-07-06 19:21:35.698959 | orchestrator | 19:21:35.698 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-06 19:21:35.698966 | orchestrator | 19:21:35.698 STDOUT terraform:  } 2025-07-06 19:21:35.698983 | orchestrator | 19:21:35.698 STDOUT terraform:  } 2025-07-06 19:21:35.699028 | orchestrator | 19:21:35.698 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-07-06 19:21:35.699073 | orchestrator | 19:21:35.699 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-06 19:21:35.699124 | orchestrator | 19:21:35.699 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-06 19:21:35.699146 | orchestrator | 19:21:35.699 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-06 19:21:35.699181 | orchestrator | 19:21:35.699 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-06 19:21:35.699220 | orchestrator | 19:21:35.699 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.699256 | orchestrator | 19:21:35.699 STDOUT terraform:  + device_id = (known after apply) 2025-07-06 19:21:35.699292 | orchestrator | 19:21:35.699 STDOUT terraform:  + device_owner = (known after apply) 2025-07-06 19:21:35.699330 | orchestrator | 19:21:35.699 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-06 19:21:35.699369 | orchestrator | 19:21:35.699 STDOUT terraform:  + dns_name = (known after apply) 2025-07-06 19:21:35.699439 | orchestrator | 19:21:35.699 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.699502 | orchestrator | 19:21:35.699 STDOUT terraform:  + mac_address = (known after apply) 2025-07-06 19:21:35.699530 | orchestrator | 19:21:35.699 STDOUT terraform:  + network_id = (known after apply) 2025-07-06 19:21:35.699564 | orchestrator | 19:21:35.699 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-06 19:21:35.699602 | orchestrator | 19:21:35.699 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-06 19:21:35.699639 | orchestrator | 19:21:35.699 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.699675 | orchestrator | 19:21:35.699 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-06 19:21:35.699710 | orchestrator | 19:21:35.699 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.699730 | orchestrator | 19:21:35.699 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.699757 | orchestrator | 19:21:35.699 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-06 19:21:35.699764 | orchestrator | 19:21:35.699 STDOUT terraform:  } 2025-07-06 19:21:35.699787 | 
orchestrator | 19:21:35.699 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.699814 | orchestrator | 19:21:35.699 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-06 19:21:35.699821 | orchestrator | 19:21:35.699 STDOUT terraform:  } 2025-07-06 19:21:35.699845 | orchestrator | 19:21:35.699 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.699874 | orchestrator | 19:21:35.699 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-06 19:21:35.699890 | orchestrator | 19:21:35.699 STDOUT terraform:  } 2025-07-06 19:21:35.699896 | orchestrator | 19:21:35.699 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.699925 | orchestrator | 19:21:35.699 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-06 19:21:35.699932 | orchestrator | 19:21:35.699 STDOUT terraform:  } 2025-07-06 19:21:35.699958 | orchestrator | 19:21:35.699 STDOUT terraform:  + binding (known after apply) 2025-07-06 19:21:35.699966 | orchestrator | 19:21:35.699 STDOUT terraform:  + fixed_ip { 2025-07-06 19:21:35.699992 | orchestrator | 19:21:35.699 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-07-06 19:21:35.700021 | orchestrator | 19:21:35.699 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-06 19:21:35.700035 | orchestrator | 19:21:35.700 STDOUT terraform:  } 2025-07-06 19:21:35.700049 | orchestrator | 19:21:35.700 STDOUT terraform:  } 2025-07-06 19:21:35.700095 | orchestrator | 19:21:35.700 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-07-06 19:21:35.700146 | orchestrator | 19:21:35.700 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-06 19:21:35.700185 | orchestrator | 19:21:35.700 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-06 19:21:35.700220 | orchestrator | 19:21:35.700 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-06 19:21:35.700255 | orchestrator | 19:21:35.700 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-06 19:21:35.700291 | orchestrator | 19:21:35.700 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.700326 | orchestrator | 19:21:35.700 STDOUT terraform:  + device_id = (known after apply) 2025-07-06 19:21:35.700362 | orchestrator | 19:21:35.700 STDOUT terraform:  + device_owner = (known after apply) 2025-07-06 19:21:35.700402 | orchestrator | 19:21:35.700 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-06 19:21:35.700433 | orchestrator | 19:21:35.700 STDOUT terraform:  + dns_name = (known after apply) 2025-07-06 19:21:35.700500 | orchestrator | 19:21:35.700 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.700535 | orchestrator | 19:21:35.700 STDOUT terraform:  + mac_address = (known after apply) 2025-07-06 19:21:35.700570 | orchestrator | 19:21:35.700 STDOUT terraform:  + network_id = (known after apply) 2025-07-06 19:21:35.700605 | orchestrator | 19:21:35.700 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-06 19:21:35.700640 | orchestrator | 19:21:35.700 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-06 19:21:35.700678 | orchestrator | 19:21:35.700 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.700714 | orchestrator | 19:21:35.700 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-06 19:21:35.700749 | orchestrator | 19:21:35.700 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.700768 | orchestrator | 
19:21:35.700 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.700801 | orchestrator | 19:21:35.700 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-06 19:21:35.700814 | orchestrator | 19:21:35.700 STDOUT terraform:  } 2025-07-06 19:21:35.700830 | orchestrator | 19:21:35.700 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.700860 | orchestrator | 19:21:35.700 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-06 19:21:35.700866 | orchestrator | 19:21:35.700 STDOUT terraform:  } 2025-07-06 19:21:35.700890 | orchestrator | 19:21:35.700 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.700919 | orchestrator | 19:21:35.700 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-06 19:21:35.700926 | orchestrator | 19:21:35.700 STDOUT terraform:  } 2025-07-06 19:21:35.700950 | orchestrator | 19:21:35.700 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.700978 | orchestrator | 19:21:35.700 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-06 19:21:35.700992 | orchestrator | 19:21:35.700 STDOUT terraform:  } 2025-07-06 19:21:35.701015 | orchestrator | 19:21:35.700 STDOUT terraform:  + binding (known after apply) 2025-07-06 19:21:35.701030 | orchestrator | 19:21:35.701 STDOUT terraform:  + fixed_ip { 2025-07-06 19:21:35.701054 | orchestrator | 19:21:35.701 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-07-06 19:21:35.701083 | orchestrator | 19:21:35.701 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-06 19:21:35.701089 | orchestrator | 19:21:35.701 STDOUT terraform:  } 2025-07-06 19:21:35.701104 | orchestrator | 19:21:35.701 STDOUT terraform:  } 2025-07-06 19:21:35.701152 | orchestrator | 19:21:35.701 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-07-06 19:21:35.701197 | orchestrator | 19:21:35.701 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-06 19:21:35.701232 | orchestrator | 19:21:35.701 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-06 19:21:35.701268 | orchestrator | 19:21:35.701 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-06 19:21:35.701303 | orchestrator | 19:21:35.701 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-06 19:21:35.701339 | orchestrator | 19:21:35.701 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.701375 | orchestrator | 19:21:35.701 STDOUT terraform:  + device_id = (known after apply) 2025-07-06 19:21:35.701411 | orchestrator | 19:21:35.701 STDOUT terraform:  + device_owner = (known after apply) 2025-07-06 19:21:35.701448 | orchestrator | 19:21:35.701 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-06 19:21:35.701511 | orchestrator | 19:21:35.701 STDOUT terraform:  + dns_name = (known after apply) 2025-07-06 19:21:35.701550 | orchestrator | 19:21:35.701 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.701588 | orchestrator | 19:21:35.701 STDOUT terraform:  + mac_address = (known after apply) 2025-07-06 19:21:35.701624 | orchestrator | 19:21:35.701 STDOUT terraform:  + network_id = (known after apply) 2025-07-06 19:21:35.701661 | orchestrator | 19:21:35.701 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-06 19:21:35.701694 | orchestrator | 19:21:35.701 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-06 19:21:35.701732 | orchestrator | 19:21:35.701 STDOUT terraform:  + region = (known after apply) 
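Stepping back to the resources planned earlier in this output, the "testbed" key pair, the node_volume_attachment[0..8] entries and the manager floating IP from the "public" pool correspond to three small resource blocks of roughly the following shape. This is again only an illustration: the plan does not show how the nine attachments map onto the six nodes, nor the volume resources behind them, so that wiring is an assumption.

resource "openstack_compute_keypair_v2" "key" {
  # public_key is "(known after apply)" and private_key "(sensitive value)" in
  # the plan, i.e. Nova generates the key pair; no public_key is supplied here.
  name = "testbed"
}

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  # Nine attachments are planned; the node/volume mapping below is an assumption.
  count       = length(openstack_blockstorage_volume_v3.node_extra_volume)
  instance_id = openstack_compute_instance_v2.node_server[count.index % var.number_of_nodes].id
  volume_id   = openstack_blockstorage_volume_v3.node_extra_volume[count.index].id
}

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}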
2025-07-06 19:21:35.701767 | orchestrator | 19:21:35.701 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-06 19:21:35.701803 | orchestrator | 19:21:35.701 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.701823 | orchestrator | 19:21:35.701 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.701851 | orchestrator | 19:21:35.701 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-06 19:21:35.701857 | orchestrator | 19:21:35.701 STDOUT terraform:  } 2025-07-06 19:21:35.701883 | orchestrator | 19:21:35.701 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.701913 | orchestrator | 19:21:35.701 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-06 19:21:35.701928 | orchestrator | 19:21:35.701 STDOUT terraform:  } 2025-07-06 19:21:35.701949 | orchestrator | 19:21:35.701 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.701977 | orchestrator | 19:21:35.701 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-06 19:21:35.701983 | orchestrator | 19:21:35.701 STDOUT terraform:  } 2025-07-06 19:21:35.702005 | orchestrator | 19:21:35.701 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.702075 | orchestrator | 19:21:35.702 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-06 19:21:35.702083 | orchestrator | 19:21:35.702 STDOUT terraform:  } 2025-07-06 19:21:35.702088 | orchestrator | 19:21:35.702 STDOUT terraform:  + binding (known after apply) 2025-07-06 19:21:35.702094 | orchestrator | 19:21:35.702 STDOUT terraform:  + fixed_ip { 2025-07-06 19:21:35.702118 | orchestrator | 19:21:35.702 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-07-06 19:21:35.702149 | orchestrator | 19:21:35.702 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-06 19:21:35.702156 | orchestrator | 19:21:35.702 STDOUT terraform:  } 2025-07-06 19:21:35.702171 | orchestrator | 19:21:35.702 STDOUT terraform:  } 2025-07-06 19:21:35.702218 | orchestrator | 19:21:35.702 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-07-06 19:21:35.702263 | orchestrator | 19:21:35.702 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-06 19:21:35.702299 | orchestrator | 19:21:35.702 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-06 19:21:35.702335 | orchestrator | 19:21:35.702 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-06 19:21:35.702370 | orchestrator | 19:21:35.702 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-06 19:21:35.702407 | orchestrator | 19:21:35.702 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.702445 | orchestrator | 19:21:35.702 STDOUT terraform:  + device_id = (known after apply) 2025-07-06 19:21:35.702504 | orchestrator | 19:21:35.702 STDOUT terraform:  + device_owner = (known after apply) 2025-07-06 19:21:35.702540 | orchestrator | 19:21:35.702 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-06 19:21:35.702577 | orchestrator | 19:21:35.702 STDOUT terraform:  + dns_name = (known after apply) 2025-07-06 19:21:35.702612 | orchestrator | 19:21:35.702 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.702648 | orchestrator | 19:21:35.702 STDOUT terraform:  + mac_address = (known after apply) 2025-07-06 19:21:35.702684 | orchestrator | 19:21:35.702 STDOUT terraform:  + network_id = (known after apply) 2025-07-06 19:21:35.702720 | orchestrator | 19:21:35.702 STDOUT terraform: 
 + port_security_enabled = (known after apply) 2025-07-06 19:21:35.702782 | orchestrator | 19:21:35.702 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-06 19:21:35.702815 | orchestrator | 19:21:35.702 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.702850 | orchestrator | 19:21:35.702 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-06 19:21:35.702888 | orchestrator | 19:21:35.702 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.702907 | orchestrator | 19:21:35.702 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.702936 | orchestrator | 19:21:35.702 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-06 19:21:35.702943 | orchestrator | 19:21:35.702 STDOUT terraform:  } 2025-07-06 19:21:35.702965 | orchestrator | 19:21:35.702 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.702994 | orchestrator | 19:21:35.702 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-06 19:21:35.703001 | orchestrator | 19:21:35.702 STDOUT terraform:  } 2025-07-06 19:21:35.703023 | orchestrator | 19:21:35.703 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.703052 | orchestrator | 19:21:35.703 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-06 19:21:35.703059 | orchestrator | 19:21:35.703 STDOUT terraform:  } 2025-07-06 19:21:35.703083 | orchestrator | 19:21:35.703 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:35.703111 | orchestrator | 19:21:35.703 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-06 19:21:35.703126 | orchestrator | 19:21:35.703 STDOUT terraform:  } 2025-07-06 19:21:35.703149 | orchestrator | 19:21:35.703 STDOUT terraform:  + binding (known after apply) 2025-07-06 19:21:35.703164 | orchestrator | 19:21:35.703 STDOUT terraform:  + fixed_ip { 2025-07-06 19:21:35.703188 | orchestrator | 19:21:35.703 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-07-06 19:21:35.703219 | orchestrator | 19:21:35.703 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-06 19:21:35.703226 | orchestrator | 19:21:35.703 STDOUT terraform:  } 2025-07-06 19:21:35.703244 | orchestrator | 19:21:35.703 STDOUT terraform:  } 2025-07-06 19:21:35.703291 | orchestrator | 19:21:35.703 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-07-06 19:21:35.703338 | orchestrator | 19:21:35.703 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-07-06 19:21:35.703359 | orchestrator | 19:21:35.703 STDOUT terraform:  + force_destroy = false 2025-07-06 19:21:35.703388 | orchestrator | 19:21:35.703 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.703417 | orchestrator | 19:21:35.703 STDOUT terraform:  + port_id = (known after apply) 2025-07-06 19:21:35.703447 | orchestrator | 19:21:35.703 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.703489 | orchestrator | 19:21:35.703 STDOUT terraform:  + router_id = (known after apply) 2025-07-06 19:21:35.703518 | orchestrator | 19:21:35.703 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-06 19:21:35.703524 | orchestrator | 19:21:35.703 STDOUT terraform:  } 2025-07-06 19:21:35.703562 | orchestrator | 19:21:35.703 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-07-06 19:21:35.703599 | orchestrator | 19:21:35.703 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-07-06 19:21:35.703635 | orchestrator | 
19:21:35.703 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-06 19:21:35.703672 | orchestrator | 19:21:35.703 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.703695 | orchestrator | 19:21:35.703 STDOUT terraform:  + availability_zone_hints = [ 2025-07-06 19:21:35.703704 | orchestrator | 19:21:35.703 STDOUT terraform:  + "nova", 2025-07-06 19:21:35.703717 | orchestrator | 19:21:35.703 STDOUT terraform:  ] 2025-07-06 19:21:35.703754 | orchestrator | 19:21:35.703 STDOUT terraform:  + distributed = (known after apply) 2025-07-06 19:21:35.703791 | orchestrator | 19:21:35.703 STDOUT terraform:  + enable_snat = (known after apply) 2025-07-06 19:21:35.703844 | orchestrator | 19:21:35.703 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-07-06 19:21:35.703922 | orchestrator | 19:21:35.703 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-07-06 19:21:35.703933 | orchestrator | 19:21:35.703 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.703941 | orchestrator | 19:21:35.703 STDOUT terraform:  + name = "testbed" 2025-07-06 19:21:35.704424 | orchestrator | 19:21:35.703 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.704513 | orchestrator | 19:21:35.703 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.704523 | orchestrator | 19:21:35.704 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-07-06 19:21:35.704530 | orchestrator | 19:21:35.704 STDOUT terraform:  } 2025-07-06 19:21:35.704537 | orchestrator | 19:21:35.704 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-07-06 19:21:35.704544 | orchestrator | 19:21:35.704 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-07-06 19:21:35.704550 | orchestrator | 19:21:35.704 STDOUT terraform:  + description = "ssh" 2025-07-06 19:21:35.704556 | orchestrator | 19:21:35.704 STDOUT terraform:  + direction = "ingress" 2025-07-06 19:21:35.704562 | orchestrator | 19:21:35.704 STDOUT terraform:  + ethertype = "IPv4" 2025-07-06 19:21:35.704568 | orchestrator | 19:21:35.704 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.704574 | orchestrator | 19:21:35.704 STDOUT terraform:  + port_range_max = 22 2025-07-06 19:21:35.704579 | orchestrator | 19:21:35.704 STDOUT terraform:  + port_range_min = 22 2025-07-06 19:21:35.704601 | orchestrator | 19:21:35.704 STDOUT terraform:  + protocol = "tcp" 2025-07-06 19:21:35.704608 | orchestrator | 19:21:35.704 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.704614 | orchestrator | 19:21:35.704 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-06 19:21:35.704620 | orchestrator | 19:21:35.704 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-06 19:21:35.704625 | orchestrator | 19:21:35.704 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-06 19:21:35.704637 | orchestrator | 19:21:35.704 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-06 19:21:35.704643 | orchestrator | 19:21:35.704 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.704648 | orchestrator | 19:21:35.704 STDOUT terraform:  } 2025-07-06 19:21:35.704654 | orchestrator | 19:21:35.704 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-07-06 19:21:35.704660 | orchestrator | 19:21:35.704 STDOUT 
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-07-06 19:21:35.704666 | orchestrator | 19:21:35.704 STDOUT terraform:  + description = "wireguard" 2025-07-06 19:21:35.704675 | orchestrator | 19:21:35.704 STDOUT terraform:  + direction = "ingress" 2025-07-06 19:21:35.704701 | orchestrator | 19:21:35.704 STDOUT terraform:  + ethertype = "IPv4" 2025-07-06 19:21:35.704737 | orchestrator | 19:21:35.704 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.704761 | orchestrator | 19:21:35.704 STDOUT terraform:  + port_range_max = 51820 2025-07-06 19:21:35.704787 | orchestrator | 19:21:35.704 STDOUT terraform:  + port_range_min = 51820 2025-07-06 19:21:35.704813 | orchestrator | 19:21:35.704 STDOUT terraform:  + protocol = "udp" 2025-07-06 19:21:35.704850 | orchestrator | 19:21:35.704 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.704905 | orchestrator | 19:21:35.704 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-06 19:21:35.704939 | orchestrator | 19:21:35.704 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-06 19:21:35.704969 | orchestrator | 19:21:35.704 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-06 19:21:35.705010 | orchestrator | 19:21:35.704 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-06 19:21:35.705042 | orchestrator | 19:21:35.705 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.705048 | orchestrator | 19:21:35.705 STDOUT terraform:  } 2025-07-06 19:21:35.705095 | orchestrator | 19:21:35.705 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_ru 2025-07-06 19:21:35.705154 | orchestrator | 19:21:35.705 STDOUT terraform: le3 will be created 2025-07-06 19:21:35.705207 | orchestrator | 19:21:35.705 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-07-06 19:21:35.705238 | orchestrator | 19:21:35.705 STDOUT terraform:  + direction = "ingress" 2025-07-06 19:21:35.705265 | orchestrator | 19:21:35.705 STDOUT terraform:  + ethertype = "IPv4" 2025-07-06 19:21:35.705303 | orchestrator | 19:21:35.705 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.705327 | orchestrator | 19:21:35.705 STDOUT terraform:  + protocol = "tcp" 2025-07-06 19:21:35.705365 | orchestrator | 19:21:35.705 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.705401 | orchestrator | 19:21:35.705 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-06 19:21:35.705438 | orchestrator | 19:21:35.705 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-06 19:21:35.705513 | orchestrator | 19:21:35.705 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-07-06 19:21:35.705572 | orchestrator | 19:21:35.705 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-06 19:21:35.705612 | orchestrator | 19:21:35.705 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.705619 | orchestrator | 19:21:35.705 STDOUT terraform:  } 2025-07-06 19:21:35.705673 | orchestrator | 19:21:35.705 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-07-06 19:21:35.710936 | orchestrator | 19:21:35.705 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-07-06 19:21:35.711015 | orchestrator | 19:21:35.705 STDOUT terraform:  + direction = "ingress" 
2025-07-06 19:21:35.711022 | orchestrator | 19:21:35.705 STDOUT terraform:  + ethertype = "IPv4" 2025-07-06 19:21:35.711027 | orchestrator | 19:21:35.705 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.711031 | orchestrator | 19:21:35.705 STDOUT terraform:  + protocol = "udp" 2025-07-06 19:21:35.711036 | orchestrator | 19:21:35.705 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.711051 | orchestrator | 19:21:35.705 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-06 19:21:35.711055 | orchestrator | 19:21:35.705 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-06 19:21:35.711059 | orchestrator | 19:21:35.705 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-07-06 19:21:35.711063 | orchestrator | 19:21:35.706 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-06 19:21:35.711066 | orchestrator | 19:21:35.706 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.711070 | orchestrator | 19:21:35.706 STDOUT terraform:  } 2025-07-06 19:21:35.711074 | orchestrator | 19:21:35.706 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-07-06 19:21:35.711078 | orchestrator | 19:21:35.706 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-07-06 19:21:35.711082 | orchestrator | 19:21:35.706 STDOUT terraform:  + direction = "ingress" 2025-07-06 19:21:35.711086 | orchestrator | 19:21:35.706 STDOUT terraform:  + ethertype = "IPv4" 2025-07-06 19:21:35.711089 | orchestrator | 19:21:35.706 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.711093 | orchestrator | 19:21:35.706 STDOUT terraform:  + protocol = "icmp" 2025-07-06 19:21:35.711112 | orchestrator | 19:21:35.706 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.711116 | orchestrator | 19:21:35.706 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-06 19:21:35.711120 | orchestrator | 19:21:35.706 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-06 19:21:35.711124 | orchestrator | 19:21:35.706 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-06 19:21:35.711128 | orchestrator | 19:21:35.706 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-06 19:21:35.711131 | orchestrator | 19:21:35.706 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.711135 | orchestrator | 19:21:35.706 STDOUT terraform:  } 2025-07-06 19:21:35.711139 | orchestrator | 19:21:35.706 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-07-06 19:21:35.711143 | orchestrator | 19:21:35.706 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-07-06 19:21:35.711147 | orchestrator | 19:21:35.706 STDOUT terraform:  + direction = "ingress" 2025-07-06 19:21:35.711151 | orchestrator | 19:21:35.706 STDOUT terraform:  + ethertype = "IPv4" 2025-07-06 19:21:35.711154 | orchestrator | 19:21:35.706 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.711158 | orchestrator | 19:21:35.706 STDOUT terraform:  + protocol = "tcp" 2025-07-06 19:21:35.711162 | orchestrator | 19:21:35.706 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.711165 | orchestrator | 19:21:35.706 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-06 19:21:35.711169 | orchestrator | 19:21:35.706 STDOUT 
terraform:  + remote_group_id = (known after apply) 2025-07-06 19:21:35.711173 | orchestrator | 19:21:35.706 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-06 19:21:35.711185 | orchestrator | 19:21:35.706 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-06 19:21:35.711189 | orchestrator | 19:21:35.706 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.711193 | orchestrator | 19:21:35.706 STDOUT terraform:  } 2025-07-06 19:21:35.711197 | orchestrator | 19:21:35.706 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-07-06 19:21:35.711201 | orchestrator | 19:21:35.706 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-07-06 19:21:35.711205 | orchestrator | 19:21:35.707 STDOUT terraform:  + direction = "ingress" 2025-07-06 19:21:35.711208 | orchestrator | 19:21:35.707 STDOUT terraform:  + ethertype = "IPv4" 2025-07-06 19:21:35.711212 | orchestrator | 19:21:35.707 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.711216 | orchestrator | 19:21:35.707 STDOUT terraform:  + protocol = "udp" 2025-07-06 19:21:35.711220 | orchestrator | 19:21:35.707 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.711223 | orchestrator | 19:21:35.707 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-06 19:21:35.711231 | orchestrator | 19:21:35.707 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-06 19:21:35.711234 | orchestrator | 19:21:35.707 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-06 19:21:35.711238 | orchestrator | 19:21:35.707 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-06 19:21:35.711242 | orchestrator | 19:21:35.707 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.711246 | orchestrator | 19:21:35.707 STDOUT terraform:  } 2025-07-06 19:21:35.711249 | orchestrator | 19:21:35.707 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-07-06 19:21:35.711253 | orchestrator | 19:21:35.707 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-07-06 19:21:35.711257 | orchestrator | 19:21:35.707 STDOUT terraform:  + direction = "ingress" 2025-07-06 19:21:35.711261 | orchestrator | 19:21:35.707 STDOUT terraform:  + ethertype = "IPv4" 2025-07-06 19:21:35.711265 | orchestrator | 19:21:35.707 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.711269 | orchestrator | 19:21:35.707 STDOUT terraform:  + protocol = "icmp" 2025-07-06 19:21:35.711273 | orchestrator | 19:21:35.707 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.711277 | orchestrator | 19:21:35.707 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-06 19:21:35.711281 | orchestrator | 19:21:35.707 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-06 19:21:35.711284 | orchestrator | 19:21:35.707 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-06 19:21:35.711288 | orchestrator | 19:21:35.707 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-06 19:21:35.711292 | orchestrator | 19:21:35.707 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.711296 | orchestrator | 19:21:35.707 STDOUT terraform:  } 2025-07-06 19:21:35.711299 | orchestrator | 19:21:35.707 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp 
will be created 2025-07-06 19:21:35.711303 | orchestrator | 19:21:35.707 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-07-06 19:21:35.711307 | orchestrator | 19:21:35.707 STDOUT terraform:  + description = "vrrp" 2025-07-06 19:21:35.711311 | orchestrator | 19:21:35.707 STDOUT terraform:  + direction = "ingress" 2025-07-06 19:21:35.711337 | orchestrator | 19:21:35.707 STDOUT terraform:  + ethertype = "IPv4" 2025-07-06 19:21:35.711341 | orchestrator | 19:21:35.707 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.711348 | orchestrator | 19:21:35.707 STDOUT terraform:  + protocol = "112" 2025-07-06 19:21:35.711352 | orchestrator | 19:21:35.707 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.711356 | orchestrator | 19:21:35.708 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-06 19:21:35.711360 | orchestrator | 19:21:35.708 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-06 19:21:35.711364 | orchestrator | 19:21:35.708 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-06 19:21:35.711371 | orchestrator | 19:21:35.708 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-06 19:21:35.711375 | orchestrator | 19:21:35.708 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.711378 | orchestrator | 19:21:35.708 STDOUT terraform:  } 2025-07-06 19:21:35.711382 | orchestrator | 19:21:35.708 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-07-06 19:21:35.711386 | orchestrator | 19:21:35.708 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-07-06 19:21:35.711390 | orchestrator | 19:21:35.708 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.711394 | orchestrator | 19:21:35.708 STDOUT terraform:  + description = "management security group" 2025-07-06 19:21:35.711398 | orchestrator | 19:21:35.708 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.711411 | orchestrator | 19:21:35.708 STDOUT terraform:  + name = "testbed-management" 2025-07-06 19:21:35.711415 | orchestrator | 19:21:35.708 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.711419 | orchestrator | 19:21:35.708 STDOUT terraform:  + stateful = (known after apply) 2025-07-06 19:21:35.711422 | orchestrator | 19:21:35.708 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.711426 | orchestrator | 19:21:35.708 STDOUT terraform:  } 2025-07-06 19:21:35.711430 | orchestrator | 19:21:35.708 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-07-06 19:21:35.711436 | orchestrator | 19:21:35.708 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-07-06 19:21:35.711440 | orchestrator | 19:21:35.708 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.711444 | orchestrator | 19:21:35.708 STDOUT terraform:  + description = "node security group" 2025-07-06 19:21:35.711447 | orchestrator | 19:21:35.708 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.711465 | orchestrator | 19:21:35.708 STDOUT terraform:  + name = "testbed-node" 2025-07-06 19:21:35.711469 | orchestrator | 19:21:35.708 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.711473 | orchestrator | 19:21:35.708 STDOUT terraform:  + stateful = (known after apply) 2025-07-06 19:21:35.711477 | orchestrator | 
19:21:35.708 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.711481 | orchestrator | 19:21:35.708 STDOUT terraform:  } 2025-07-06 19:21:35.711484 | orchestrator | 19:21:35.708 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-07-06 19:21:35.711488 | orchestrator | 19:21:35.708 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-07-06 19:21:35.711492 | orchestrator | 19:21:35.708 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:35.711496 | orchestrator | 19:21:35.708 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-07-06 19:21:35.711500 | orchestrator | 19:21:35.708 STDOUT terraform:  + dns_nameservers = [ 2025-07-06 19:21:35.711504 | orchestrator | 19:21:35.708 STDOUT terraform:  + "8.8.8.8", 2025-07-06 19:21:35.711511 | orchestrator | 19:21:35.708 STDOUT terraform:  + "9.9.9.9", 2025-07-06 19:21:35.711515 | orchestrator | 19:21:35.708 STDOUT terraform:  ] 2025-07-06 19:21:35.711518 | orchestrator | 19:21:35.708 STDOUT terraform:  + enable_dhcp = true 2025-07-06 19:21:35.711522 | orchestrator | 19:21:35.708 STDOUT terraform:  + gateway_ip = (known after apply) 2025-07-06 19:21:35.711529 | orchestrator | 19:21:35.708 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.711533 | orchestrator | 19:21:35.708 STDOUT terraform:  + ip_version = 4 2025-07-06 19:21:35.711537 | orchestrator | 19:21:35.708 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-07-06 19:21:35.711540 | orchestrator | 19:21:35.708 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-07-06 19:21:35.711544 | orchestrator | 19:21:35.709 STDOUT terraform:  + name = "subnet-testbed-management" 2025-07-06 19:21:35.711548 | orchestrator | 19:21:35.709 STDOUT terraform:  + network_id = (known after apply) 2025-07-06 19:21:35.711551 | orchestrator | 19:21:35.709 STDOUT terraform:  + no_gateway = false 2025-07-06 19:21:35.711555 | orchestrator | 19:21:35.709 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:35.711559 | orchestrator | 19:21:35.709 STDOUT terraform:  + service_types = (known after apply) 2025-07-06 19:21:35.711563 | orchestrator | 19:21:35.709 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:35.711566 | orchestrator | 19:21:35.709 STDOUT terraform:  + allocation_pool { 2025-07-06 19:21:35.711570 | orchestrator | 19:21:35.709 STDOUT terraform:  + end = "192.168.31.250" 2025-07-06 19:21:35.711574 | orchestrator | 19:21:35.709 STDOUT terraform:  + start = "192.168.31.200" 2025-07-06 19:21:35.711577 | orchestrator | 19:21:35.709 STDOUT terraform:  } 2025-07-06 19:21:35.711581 | orchestrator | 19:21:35.709 STDOUT terraform:  } 2025-07-06 19:21:35.711585 | orchestrator | 19:21:35.709 STDOUT terraform:  # terraform_data.image will be created 2025-07-06 19:21:35.711589 | orchestrator | 19:21:35.709 STDOUT terraform:  + resource "terraform_data" "image" { 2025-07-06 19:21:35.711593 | orchestrator | 19:21:35.709 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.711596 | orchestrator | 19:21:35.709 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-07-06 19:21:35.711600 | orchestrator | 19:21:35.709 STDOUT terraform:  + output = (known after apply) 2025-07-06 19:21:35.711604 | orchestrator | 19:21:35.709 STDOUT terraform:  } 2025-07-06 19:21:35.711610 | orchestrator | 19:21:35.709 STDOUT terraform:  # terraform_data.image_node will be created 2025-07-06 19:21:35.711614 | orchestrator | 19:21:35.709 STDOUT terraform:  + 
resource "terraform_data" "image_node" { 2025-07-06 19:21:35.711618 | orchestrator | 19:21:35.709 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:35.711621 | orchestrator | 19:21:35.709 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-07-06 19:21:35.711625 | orchestrator | 19:21:35.709 STDOUT terraform:  + output = (known after apply) 2025-07-06 19:21:35.711629 | orchestrator | 19:21:35.709 STDOUT terraform:  } 2025-07-06 19:21:35.711632 | orchestrator | 19:21:35.709 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-07-06 19:21:35.711642 | orchestrator | 19:21:35.709 STDOUT terraform: Changes to Outputs: 2025-07-06 19:21:35.711645 | orchestrator | 19:21:35.709 STDOUT terraform:  + manager_address = (sensitive value) 2025-07-06 19:21:35.711649 | orchestrator | 19:21:35.709 STDOUT terraform:  + private_key = (sensitive value) 2025-07-06 19:21:35.946086 | orchestrator | 19:21:35.945 STDOUT terraform: terraform_data.image_node: Creating... 2025-07-06 19:21:35.946157 | orchestrator | 19:21:35.945 STDOUT terraform: terraform_data.image: Creating... 2025-07-06 19:21:35.946172 | orchestrator | 19:21:35.945 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=be883a41-83e6-4132-ea58-2943c170972d] 2025-07-06 19:21:35.946295 | orchestrator | 19:21:35.946 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=509171f4-0f0d-866a-5e23-f5d0df106056] 2025-07-06 19:21:35.966167 | orchestrator | 19:21:35.961 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-07-06 19:21:35.971056 | orchestrator | 19:21:35.970 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-07-06 19:21:35.975196 | orchestrator | 19:21:35.975 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-07-06 19:21:35.976019 | orchestrator | 19:21:35.975 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-07-06 19:21:35.976626 | orchestrator | 19:21:35.976 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-07-06 19:21:35.976721 | orchestrator | 19:21:35.976 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-07-06 19:21:35.977706 | orchestrator | 19:21:35.977 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-07-06 19:21:35.977976 | orchestrator | 19:21:35.977 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-07-06 19:21:35.978175 | orchestrator | 19:21:35.978 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-07-06 19:21:35.979544 | orchestrator | 19:21:35.979 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-07-06 19:21:36.426612 | orchestrator | 19:21:36.426 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-07-06 19:21:36.430878 | orchestrator | 19:21:36.430 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-07-06 19:21:36.436402 | orchestrator | 19:21:36.436 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-07-06 19:21:36.446178 | orchestrator | 19:21:36.445 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 
2025-07-06 19:21:36.487588 | orchestrator | 19:21:36.487 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-07-06 19:21:36.493249 | orchestrator | 19:21:36.493 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-07-06 19:21:41.930293 | orchestrator | 19:21:41.929 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=ad5bb73b-b6e4-4628-b827-8bb4d8511360] 2025-07-06 19:21:41.946717 | orchestrator | 19:21:41.946 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-07-06 19:21:45.977984 | orchestrator | 19:21:45.977 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-07-06 19:21:45.978182 | orchestrator | 19:21:45.977 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-07-06 19:21:45.978802 | orchestrator | 19:21:45.978 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-07-06 19:21:45.980141 | orchestrator | 19:21:45.979 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-07-06 19:21:45.980319 | orchestrator | 19:21:45.980 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-07-06 19:21:45.980518 | orchestrator | 19:21:45.980 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-07-06 19:21:46.437585 | orchestrator | 19:21:46.437 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-07-06 19:21:46.447734 | orchestrator | 19:21:46.447 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-07-06 19:21:46.494104 | orchestrator | 19:21:46.493 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-07-06 19:21:46.585560 | orchestrator | 19:21:46.585 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=825fbe01-1f52-40fd-870f-6965feac768c] 2025-07-06 19:21:46.594570 | orchestrator | 19:21:46.594 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-07-06 19:21:46.597793 | orchestrator | 19:21:46.597 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=6eb6290b-216e-4753-9f37-507fd8d1c155] 2025-07-06 19:21:46.604304 | orchestrator | 19:21:46.604 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=46febb03-7465-44d2-9b41-dd661ec3aa7d] 2025-07-06 19:21:46.607238 | orchestrator | 19:21:46.606 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-07-06 19:21:46.609641 | orchestrator | 19:21:46.609 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=901e3f2c-f061-4105-8266-58d4d98b5960] 2025-07-06 19:21:46.611736 | orchestrator | 19:21:46.611 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-07-06 19:21:46.618755 | orchestrator | 19:21:46.618 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 
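The `terraform_data.image` / `data.openstack_images_image_v2.image` pair that is created and read above is a common pattern for pinning an image name and resolving it to a Glance image ID at apply time. A sketch under the assumption that the data source simply looks up the name stored in `terraform_data`:

```hcl
# Sketch: resolve the image name "Ubuntu 24.04" (the terraform_data input
# shown in the plan) to an image ID; most_recent picks the newest match.
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}

data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output   # output mirrors input after apply
  most_recent = true
}
```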
2025-07-06 19:21:46.623390 | orchestrator | 19:21:46.623 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=ad2af1d2-0168-4556-9317-4e4f08581fa1] 2025-07-06 19:21:46.633558 | orchestrator | 19:21:46.633 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-07-06 19:21:46.637241 | orchestrator | 19:21:46.637 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=95e38168-1e77-4099-bfde-ad7249670c4c] 2025-07-06 19:21:46.642783 | orchestrator | 19:21:46.642 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-07-06 19:21:46.696512 | orchestrator | 19:21:46.696 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=951512cc-5411-4e34-a1bc-779e76dbc3d2] 2025-07-06 19:21:46.696983 | orchestrator | 19:21:46.696 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=ee53a9be-d7f6-4740-ab76-379edf2c3c5b] 2025-07-06 19:21:46.714115 | orchestrator | 19:21:46.713 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-07-06 19:21:46.714204 | orchestrator | 19:21:46.714 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=d394e861-9c48-44bd-b1dc-9e2695f6f7e7] 2025-07-06 19:21:46.718927 | orchestrator | 19:21:46.718 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-07-06 19:21:46.719926 | orchestrator | 19:21:46.719 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=24ea6f3e6fdef80c1340a44538ec7598f05f1aae] 2025-07-06 19:21:46.721480 | orchestrator | 19:21:46.721 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-07-06 19:21:46.725888 | orchestrator | 19:21:46.725 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=27c074f09bf673a8c9cb8379137901d35753346e] 2025-07-06 19:21:51.949385 | orchestrator | 19:21:51.949 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-07-06 19:21:52.263265 | orchestrator | 19:21:52.262 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=32940bce-9d30-4ec6-9fea-d63c9095158b] 2025-07-06 19:21:52.665720 | orchestrator | 19:21:52.665 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=d7ceb0ec-bfcc-47cc-9328-70353d0ac462] 2025-07-06 19:21:52.673426 | orchestrator | 19:21:52.673 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-07-06 19:21:56.597216 | orchestrator | 19:21:56.596 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-07-06 19:21:56.608546 | orchestrator | 19:21:56.608 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-07-06 19:21:56.612752 | orchestrator | 19:21:56.612 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-07-06 19:21:56.620160 | orchestrator | 19:21:56.619 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-07-06 19:21:56.634576 | orchestrator | 19:21:56.634 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... 
[10s elapsed] 2025-07-06 19:21:56.643840 | orchestrator | 19:21:56.643 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-07-06 19:21:56.961312 | orchestrator | 19:21:56.960 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=1eb046de-56ce-4fec-94aa-451822a3ca91] 2025-07-06 19:21:56.995832 | orchestrator | 19:21:56.995 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=01ded91f-df62-4447-a733-0e6b15acbb5e] 2025-07-06 19:21:57.025078 | orchestrator | 19:21:57.024 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=9a360e1e-d618-4e64-9063-d6a563856280] 2025-07-06 19:21:57.026587 | orchestrator | 19:21:57.026 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=63b2be34-61af-402d-bd9e-8faa5fdcd0f6] 2025-07-06 19:21:57.040907 | orchestrator | 19:21:57.040 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=ea2d9aa9-10cd-4961-88d7-4a8638c93c01] 2025-07-06 19:21:57.190814 | orchestrator | 19:21:57.190 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=0815eb16-c1f1-4b6f-b81a-a7126aeb6273] 2025-07-06 19:22:00.657551 | orchestrator | 19:22:00.657 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=cb52e102-8f3e-40af-9a2c-c410a4c58e7a] 2025-07-06 19:22:00.663486 | orchestrator | 19:22:00.663 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-07-06 19:22:00.664900 | orchestrator | 19:22:00.664 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-07-06 19:22:00.666599 | orchestrator | 19:22:00.666 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-07-06 19:22:00.863269 | orchestrator | 19:22:00.862 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=4ae62ad7-361c-47c5-9024-7d80226e4b29] 2025-07-06 19:22:00.875301 | orchestrator | 19:22:00.875 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-07-06 19:22:00.877669 | orchestrator | 19:22:00.877 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-07-06 19:22:00.879195 | orchestrator | 19:22:00.878 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-07-06 19:22:00.883809 | orchestrator | 19:22:00.883 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-07-06 19:22:00.888896 | orchestrator | 19:22:00.888 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-07-06 19:22:00.894939 | orchestrator | 19:22:00.894 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-07-06 19:22:00.915218 | orchestrator | 19:22:00.914 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=9bf1806b-14c9-417b-9c4e-506381e917b0] 2025-07-06 19:22:00.922309 | orchestrator | 19:22:00.921 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 
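The subnet and router whose creation completes above carry exactly the values listed in the plan (CIDR, DNS servers, allocation pool, external network ID, availability zone hint). A sketch of the corresponding resources, with the external network ID inlined purely for illustration:

```hcl
# Sketch: management subnet and router as described in the plan output.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"  # value taken from the plan
  availability_zone_hints = ["nova"]
}
```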
2025-07-06 19:22:00.922377 | orchestrator | 19:22:00.922 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-07-06 19:22:00.924589 | orchestrator | 19:22:00.924 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-07-06 19:22:01.132252 | orchestrator | 19:22:01.132 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=07777549-de6a-4ede-82cc-8460d9f34fc3] 2025-07-06 19:22:01.137378 | orchestrator | 19:22:01.137 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-07-06 19:22:01.204901 | orchestrator | 19:22:01.204 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=9d1665b1-55d7-4247-8c0e-8065a666851a] 2025-07-06 19:22:01.222627 | orchestrator | 19:22:01.222 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-07-06 19:22:01.279686 | orchestrator | 19:22:01.279 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=62cbaac7-35ad-4ae2-9b9f-fdef404870f9] 2025-07-06 19:22:01.296511 | orchestrator | 19:22:01.296 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-07-06 19:22:01.429213 | orchestrator | 19:22:01.428 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=21800ce0-99db-4bdf-a41a-de07f245d9e3] 2025-07-06 19:22:01.447635 | orchestrator | 19:22:01.447 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-07-06 19:22:01.519357 | orchestrator | 19:22:01.518 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=a076814b-67cd-4e99-b4ec-213600f57248] 2025-07-06 19:22:01.533122 | orchestrator | 19:22:01.532 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-07-06 19:22:01.730074 | orchestrator | 19:22:01.729 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=613fe335-6ec7-4196-8712-dd7486f5e85b] 2025-07-06 19:22:01.744103 | orchestrator | 19:22:01.743 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-07-06 19:22:02.096982 | orchestrator | 19:22:02.096 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=af4fc8f1-cb4d-4e39-bfd8-4bae2b58ec49] 2025-07-06 19:22:02.098959 | orchestrator | 19:22:02.098 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=ce0ddbde-0c6a-41f4-a8f1-e432bb872ed5] 2025-07-06 19:22:02.111784 | orchestrator | 19:22:02.111 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 
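The security-group rules being created above follow directly from the plan: SSH (22/tcp) and WireGuard (51820/udp) open to 0.0.0.0/0, intra-subnet TCP/UDP from 192.168.16.0/20, ICMP, and VRRP (IP protocol 112). A condensed sketch of the group plus two of its rules:

```hcl
# Sketch: management security group with the SSH and VRRP rules from the plan.
resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"   # VRRP, as in the plan
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id  # group association assumed
}
```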
2025-07-06 19:22:02.498485 | orchestrator | 19:22:02.498 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=702c84ae-1b46-46e4-afdd-4715e971ba2e] 2025-07-06 19:22:06.700128 | orchestrator | 19:22:06.699 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=2db0ef6b-5ca5-4ab9-9dba-f779228e3459] 2025-07-06 19:22:07.152949 | orchestrator | 19:22:07.152 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=d6d3588e-a7a3-4a1c-b43b-51d528a00e47] 2025-07-06 19:22:07.623481 | orchestrator | 19:22:07.623 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=da17516d-7a6e-4a6f-8410-a7829245930c] 2025-07-06 19:22:07.741482 | orchestrator | 19:22:07.741 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=b16af999-2ab0-466c-b4b2-e55c878b7f8c] 2025-07-06 19:22:08.005094 | orchestrator | 19:22:08.004 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 7s [id=78263ed8-02b9-4128-b1a9-8d36ecf4bbf5] 2025-07-06 19:22:08.027327 | orchestrator | 19:22:08.026 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=e9e18d0e-964d-4fb2-aafd-d78fa251c8cd] 2025-07-06 19:22:08.034092 | orchestrator | 19:22:08.033 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 7s [id=975dcfb3-98a8-4120-b64a-ca4915ce3701] 2025-07-06 19:22:08.588504 | orchestrator | 19:22:08.588 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=ea1b4842-64d2-4a81-a700-683aa5e49271] 2025-07-06 19:22:08.600621 | orchestrator | 19:22:08.600 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-07-06 19:22:08.623924 | orchestrator | 19:22:08.623 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-07-06 19:22:08.630684 | orchestrator | 19:22:08.630 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-07-06 19:22:08.639568 | orchestrator | 19:22:08.639 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-07-06 19:22:08.640462 | orchestrator | 19:22:08.640 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-07-06 19:22:08.641825 | orchestrator | 19:22:08.641 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-07-06 19:22:08.645761 | orchestrator | 19:22:08.645 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-07-06 19:22:15.025141 | orchestrator | 19:22:15.024 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=c0b9a5ce-ec77-4169-953e-a6cf7b0776b8] 2025-07-06 19:22:15.036729 | orchestrator | 19:22:15.036 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-07-06 19:22:15.036918 | orchestrator | 19:22:15.036 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-07-06 19:22:15.037204 | orchestrator | 19:22:15.037 STDOUT terraform: local_file.inventory: Creating... 
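The router interface, the manager floating IP, and its association with the manager port that complete in this stretch correspond to three small resources; a sketch, with the floating IP pool name assumed since it is not visible in the log:

```hcl
# Sketch: attach the management subnet to the router and bind a floating IP
# to the manager's management port.
resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "external"   # assumed pool name
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}
```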
2025-07-06 19:22:15.041785 | orchestrator | 19:22:15.041 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=9168f332a76c7f88349230976e749f3f5de50f87] 2025-07-06 19:22:15.042145 | orchestrator | 19:22:15.042 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=aebec7280a5523523447f6937e177df21803ce7f] 2025-07-06 19:22:15.755745 | orchestrator | 19:22:15.755 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=c0b9a5ce-ec77-4169-953e-a6cf7b0776b8] 2025-07-06 19:22:18.623327 | orchestrator | 19:22:18.623 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-07-06 19:22:18.633671 | orchestrator | 19:22:18.633 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-07-06 19:22:18.644985 | orchestrator | 19:22:18.644 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-07-06 19:22:18.645072 | orchestrator | 19:22:18.644 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-07-06 19:22:18.645250 | orchestrator | 19:22:18.645 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-07-06 19:22:18.647189 | orchestrator | 19:22:18.646 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-07-06 19:22:28.624497 | orchestrator | 19:22:28.624 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-07-06 19:22:28.633877 | orchestrator | 19:22:28.633 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-07-06 19:22:28.645951 | orchestrator | 19:22:28.645 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-07-06 19:22:28.646087 | orchestrator | 19:22:28.645 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-07-06 19:22:28.646728 | orchestrator | 19:22:28.645 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-07-06 19:22:28.647196 | orchestrator | 19:22:28.646 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-07-06 19:22:29.180564 | orchestrator | 19:22:29.180 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=7c1b8d17-0b42-41e9-91c3-c76d2fb6110a] 2025-07-06 19:22:29.194461 | orchestrator | 19:22:29.194 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=9290bf60-fd28-4d55-bf4d-a3f1efe829f9] 2025-07-06 19:22:38.629328 | orchestrator | 19:22:38.628 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-07-06 19:22:38.634732 | orchestrator | 19:22:38.634 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-07-06 19:22:38.646890 | orchestrator | 19:22:38.646 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-07-06 19:22:38.647855 | orchestrator | 19:22:38.647 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2025-07-06 19:22:39.155826 | orchestrator | 19:22:39.155 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=952644bf-0fb2-4676-8fee-470057a5f18f] 2025-07-06 19:22:39.185067 | orchestrator | 19:22:39.184 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=eab3795c-68ce-4106-bd01-25a86bfe7394] 2025-07-06 19:22:39.275877 | orchestrator | 19:22:39.275 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 30s [id=2f48cd97-659f-4667-a86a-a4170b4b3b8c] 2025-07-06 19:22:39.292696 | orchestrator | 19:22:39.290 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=8d8e857d-ba48-4e6c-8d3b-269b26fef48b] 2025-07-06 19:22:39.321453 | orchestrator | 19:22:39.321 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-07-06 19:22:39.321564 | orchestrator | 19:22:39.321 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-07-06 19:22:39.322809 | orchestrator | 19:22:39.322 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-07-06 19:22:39.323572 | orchestrator | 19:22:39.323 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-07-06 19:22:39.336920 | orchestrator | 19:22:39.336 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=285620251310561865] 2025-07-06 19:22:39.337714 | orchestrator | 19:22:39.337 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-07-06 19:22:39.339502 | orchestrator | 19:22:39.338 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-07-06 19:22:39.339693 | orchestrator | 19:22:39.339 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-07-06 19:22:39.339770 | orchestrator | 19:22:39.339 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-07-06 19:22:39.339840 | orchestrator | 19:22:39.339 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-07-06 19:22:39.343730 | orchestrator | 19:22:39.343 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-07-06 19:22:39.354995 | orchestrator | 19:22:39.354 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
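The node servers created above boot from their base volumes and then receive the extra data volumes via separate attachment resources. A sketch of that pairing, with the flavor name, sizes, and index mapping assumed (the log only shows which volume IDs end up on which servers):

```hcl
# Sketch: node instance booted from its base volume via a management port,
# plus one example volume attachment. Flavor name is an assumption.
resource "openstack_compute_instance_v2" "node_server" {
  count       = 6
  name        = "testbed-node-${count.index}"
  flavor_name = "SCS-4V-16"   # assumed
  key_pair    = openstack_compute_keypair_v2.key.name

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }

  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_base_volume[count.index].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = true
  }
}

# One of the nine attachments; the real index mapping is not visible in the log.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment_example" {
  instance_id = openstack_compute_instance_v2.node_server[3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[0].id
}
```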
2025-07-06 19:22:44.677418 | orchestrator | 19:22:44.676 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=9290bf60-fd28-4d55-bf4d-a3f1efe829f9/825fbe01-1f52-40fd-870f-6965feac768c] 2025-07-06 19:22:44.688269 | orchestrator | 19:22:44.687 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=8d8e857d-ba48-4e6c-8d3b-269b26fef48b/ad2af1d2-0168-4556-9317-4e4f08581fa1] 2025-07-06 19:22:44.708411 | orchestrator | 19:22:44.707 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=2f48cd97-659f-4667-a86a-a4170b4b3b8c/6eb6290b-216e-4753-9f37-507fd8d1c155] 2025-07-06 19:22:44.719670 | orchestrator | 19:22:44.719 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=9290bf60-fd28-4d55-bf4d-a3f1efe829f9/ee53a9be-d7f6-4740-ab76-379edf2c3c5b] 2025-07-06 19:22:44.738646 | orchestrator | 19:22:44.738 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=8d8e857d-ba48-4e6c-8d3b-269b26fef48b/46febb03-7465-44d2-9b41-dd661ec3aa7d] 2025-07-06 19:22:44.751050 | orchestrator | 19:22:44.750 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=2f48cd97-659f-4667-a86a-a4170b4b3b8c/951512cc-5411-4e34-a1bc-779e76dbc3d2] 2025-07-06 19:22:44.767227 | orchestrator | 19:22:44.766 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=9290bf60-fd28-4d55-bf4d-a3f1efe829f9/d394e861-9c48-44bd-b1dc-9e2695f6f7e7] 2025-07-06 19:22:44.803221 | orchestrator | 19:22:44.802 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=8d8e857d-ba48-4e6c-8d3b-269b26fef48b/901e3f2c-f061-4105-8266-58d4d98b5960] 2025-07-06 19:22:44.837045 | orchestrator | 19:22:44.836 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=2f48cd97-659f-4667-a86a-a4170b4b3b8c/95e38168-1e77-4099-bfde-ad7249670c4c] 2025-07-06 19:22:49.356847 | orchestrator | 19:22:49.356 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-07-06 19:22:59.358630 | orchestrator | 19:22:59.358 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-07-06 19:23:00.336733 | orchestrator | 19:23:00.336 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=7409eed9-796f-43d3-8fb5-5b18ad3457d6] 2025-07-06 19:23:00.355445 | orchestrator | 19:23:00.355 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
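In the Outputs block that follows, `manager_address` and `private_key` print as blank lines because both are declared sensitive (the plan already listed them as `(sensitive value)`); the values are still recoverable with `terraform output -raw <name>` or from the local files written during the apply. A sketch of such declarations, with the value expressions and file path assumed:

```hcl
# Sketch: sensitive outputs are redacted in the console output but remain
# available via `terraform output -raw` and the state.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key
  sensitive = true
}

# The apply also materialises the key on disk (local_sensitive_file.id_rsa above),
# which later tasks can pick up; path and permissions are assumptions.
resource "local_sensitive_file" "id_rsa" {
  content         = openstack_compute_keypair_v2.key.private_key
  filename        = "${path.module}/.id_rsa.testbed"
  file_permission = "0600"
}
```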
2025-07-06 19:23:00.355547 | orchestrator | 19:23:00.355 STDOUT terraform: Outputs: 2025-07-06 19:23:00.355565 | orchestrator | 19:23:00.355 STDOUT terraform: manager_address = 2025-07-06 19:23:00.355596 | orchestrator | 19:23:00.355 STDOUT terraform: private_key = 2025-07-06 19:23:00.602361 | orchestrator | ok: Runtime: 0:01:33.422221 2025-07-06 19:23:00.639172 | 2025-07-06 19:23:00.639305 | TASK [Fetch manager address] 2025-07-06 19:23:01.086361 | orchestrator | ok 2025-07-06 19:23:01.097953 | 2025-07-06 19:23:01.098089 | TASK [Set manager_host address] 2025-07-06 19:23:01.178810 | orchestrator | ok 2025-07-06 19:23:01.188235 | 2025-07-06 19:23:01.188365 | LOOP [Update ansible collections] 2025-07-06 19:23:02.041405 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-06 19:23:02.041777 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-07-06 19:23:02.041835 | orchestrator | Starting galaxy collection install process 2025-07-06 19:23:02.041875 | orchestrator | Process install dependency map 2025-07-06 19:23:02.041912 | orchestrator | Starting collection install process 2025-07-06 19:23:02.041944 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons' 2025-07-06 19:23:02.041982 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons 2025-07-06 19:23:02.042023 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-07-06 19:23:02.042107 | orchestrator | ok: Item: commons Runtime: 0:00:00.535496 2025-07-06 19:23:02.863721 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-07-06 19:23:02.863902 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-06 19:23:02.863966 | orchestrator | Starting galaxy collection install process 2025-07-06 19:23:02.864015 | orchestrator | Process install dependency map 2025-07-06 19:23:02.864058 | orchestrator | Starting collection install process 2025-07-06 19:23:02.864099 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services' 2025-07-06 19:23:02.864141 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services 2025-07-06 19:23:02.864181 | orchestrator | osism.services:999.0.0 was installed successfully 2025-07-06 19:23:02.864240 | orchestrator | ok: Item: services Runtime: 0:00:00.566188 2025-07-06 19:23:02.882487 | 2025-07-06 19:23:02.882662 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-07-06 19:23:13.430527 | orchestrator | ok 2025-07-06 19:23:13.441495 | 2025-07-06 19:23:13.441639 | TASK [Wait a little longer for the manager so that everything is ready] 2025-07-06 19:24:13.483773 | orchestrator | ok 2025-07-06 19:24:13.492188 | 2025-07-06 19:24:13.492310 | TASK [Fetch manager ssh hostkey] 2025-07-06 19:24:15.071284 | orchestrator | Output suppressed because no_log was given 2025-07-06 19:24:15.082065 | 2025-07-06 19:24:15.082216 | TASK [Get ssh keypair from terraform environment] 2025-07-06 19:24:15.616198 | orchestrator | ok: Runtime: 0:00:00.010544 2025-07-06 19:24:15.624283 | 2025-07-06 19:24:15.624406 | TASK [Point out that the following task takes some time and does not give any output] 
2025-07-06 19:24:15.654011 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-07-06 19:24:15.661160 | 2025-07-06 19:24:15.661273 | TASK [Run manager part 0] 2025-07-06 19:24:16.544051 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-06 19:24:16.592001 | orchestrator | 2025-07-06 19:24:16.592050 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-07-06 19:24:16.592057 | orchestrator | 2025-07-06 19:24:16.592070 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-07-06 19:24:18.336249 | orchestrator | ok: [testbed-manager] 2025-07-06 19:24:18.336302 | orchestrator | 2025-07-06 19:24:18.336325 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-06 19:24:18.336336 | orchestrator | 2025-07-06 19:24:18.336347 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-06 19:24:20.199026 | orchestrator | ok: [testbed-manager] 2025-07-06 19:24:20.199205 | orchestrator | 2025-07-06 19:24:20.199226 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-06 19:24:20.931470 | orchestrator | ok: [testbed-manager] 2025-07-06 19:24:20.931594 | orchestrator | 2025-07-06 19:24:20.931613 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-07-06 19:24:20.977319 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:24:20.977364 | orchestrator | 2025-07-06 19:24:20.977373 | orchestrator | TASK [Update package cache] **************************************************** 2025-07-06 19:24:21.015146 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:24:21.015201 | orchestrator | 2025-07-06 19:24:21.015210 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-06 19:24:21.043915 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:24:21.043965 | orchestrator | 2025-07-06 19:24:21.043973 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-07-06 19:24:21.075095 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:24:21.075150 | orchestrator | 2025-07-06 19:24:21.075158 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-07-06 19:24:21.103709 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:24:21.103766 | orchestrator | 2025-07-06 19:24:21.103779 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-07-06 19:24:21.131650 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:24:21.131701 | orchestrator | 2025-07-06 19:24:21.131713 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-07-06 19:24:21.158101 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:24:21.158148 | orchestrator | 2025-07-06 19:24:21.158156 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-07-06 19:24:22.053209 | orchestrator | changed: [testbed-manager] 2025-07-06 19:24:22.053265 | orchestrator | 2025-07-06 19:24:22.053274 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 
2025-07-06 19:27:41.195272 | orchestrator | changed: [testbed-manager] 2025-07-06 19:27:41.195333 | orchestrator | 2025-07-06 19:27:41.195344 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-07-06 19:29:21.637610 | orchestrator | changed: [testbed-manager] 2025-07-06 19:29:21.637723 | orchestrator | 2025-07-06 19:29:21.637732 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-06 19:29:43.813624 | orchestrator | changed: [testbed-manager] 2025-07-06 19:29:43.813673 | orchestrator | 2025-07-06 19:29:43.813682 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-07-06 19:29:53.036960 | orchestrator | changed: [testbed-manager] 2025-07-06 19:29:53.037044 | orchestrator | 2025-07-06 19:29:53.037054 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-07-06 19:29:53.081359 | orchestrator | ok: [testbed-manager] 2025-07-06 19:29:53.081396 | orchestrator | 2025-07-06 19:29:53.081403 | orchestrator | TASK [Get current user] ******************************************************** 2025-07-06 19:29:53.875385 | orchestrator | ok: [testbed-manager] 2025-07-06 19:29:53.875477 | orchestrator | 2025-07-06 19:29:53.875494 | orchestrator | TASK [Create venv directory] *************************************************** 2025-07-06 19:29:54.762731 | orchestrator | changed: [testbed-manager] 2025-07-06 19:29:54.762775 | orchestrator | 2025-07-06 19:29:54.762784 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-07-06 19:30:01.257436 | orchestrator | changed: [testbed-manager] 2025-07-06 19:30:01.257563 | orchestrator | 2025-07-06 19:30:01.257614 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-07-06 19:30:07.210774 | orchestrator | changed: [testbed-manager] 2025-07-06 19:30:07.210813 | orchestrator | 2025-07-06 19:30:07.210824 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-07-06 19:30:10.023476 | orchestrator | changed: [testbed-manager] 2025-07-06 19:30:10.023557 | orchestrator | 2025-07-06 19:30:10.023573 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-07-06 19:30:11.784726 | orchestrator | changed: [testbed-manager] 2025-07-06 19:30:11.784802 | orchestrator | 2025-07-06 19:30:11.784817 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-07-06 19:30:12.886812 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-06 19:30:12.886884 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-06 19:30:12.886895 | orchestrator | 2025-07-06 19:30:12.886905 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-07-06 19:30:12.929253 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-06 19:30:12.929301 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-06 19:30:12.929307 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-06 19:30:12.929312 | orchestrator | deprecation_warnings=False in ansible.cfg. 
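The venv bootstrap above amounts to creating /opt/venv and installing the listed Python packages into it. A rough shell equivalent, assuming python3-venv is already installed:

    python3 -m venv /opt/venv
    /opt/venv/bin/pip install --upgrade pip
    /opt/venv/bin/pip install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'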
2025-07-06 19:30:16.197706 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-06 19:30:16.197798 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-06 19:30:16.197812 | orchestrator | 2025-07-06 19:30:16.197825 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-07-06 19:30:16.750736 | orchestrator | changed: [testbed-manager] 2025-07-06 19:30:16.750833 | orchestrator | 2025-07-06 19:30:16.750850 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-07-06 19:31:33.073331 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-07-06 19:31:33.073526 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-07-06 19:31:33.073547 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-07-06 19:31:33.073560 | orchestrator | 2025-07-06 19:31:33.073573 | orchestrator | TASK [Install local collections] *********************************************** 2025-07-06 19:31:35.370957 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-07-06 19:31:35.371056 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-07-06 19:31:35.371080 | orchestrator | 2025-07-06 19:31:35.371101 | orchestrator | PLAY [Create operator user] **************************************************** 2025-07-06 19:31:35.371122 | orchestrator | 2025-07-06 19:31:35.371143 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-06 19:31:36.803689 | orchestrator | ok: [testbed-manager] 2025-07-06 19:31:36.803725 | orchestrator | 2025-07-06 19:31:36.803732 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-07-06 19:31:36.863998 | orchestrator | ok: [testbed-manager] 2025-07-06 19:31:36.864039 | orchestrator | 2025-07-06 19:31:36.864047 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-07-06 19:31:36.941998 | orchestrator | ok: [testbed-manager] 2025-07-06 19:31:36.942066 | orchestrator | 2025-07-06 19:31:36.942076 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-07-06 19:31:37.728830 | orchestrator | changed: [testbed-manager] 2025-07-06 19:31:37.728921 | orchestrator | 2025-07-06 19:31:37.728937 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-07-06 19:31:38.542728 | orchestrator | changed: [testbed-manager] 2025-07-06 19:31:38.543762 | orchestrator | 2025-07-06 19:31:38.543801 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-07-06 19:31:39.956475 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-07-06 19:31:39.956574 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-07-06 19:31:39.956590 | orchestrator | 2025-07-06 19:31:39.956620 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-07-06 19:31:41.363860 | orchestrator | changed: [testbed-manager] 2025-07-06 19:31:41.365390 | orchestrator | 2025-07-06 19:31:41.365413 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-07-06 19:31:43.200664 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-07-06 
19:31:43.200711 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-07-06 19:31:43.200719 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-07-06 19:31:43.200726 | orchestrator | 2025-07-06 19:31:43.200734 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-07-06 19:31:43.254686 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:31:43.254773 | orchestrator | 2025-07-06 19:31:43.254782 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-07-06 19:31:43.826717 | orchestrator | changed: [testbed-manager] 2025-07-06 19:31:43.827331 | orchestrator | 2025-07-06 19:31:43.827356 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-07-06 19:31:43.894664 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:31:43.894735 | orchestrator | 2025-07-06 19:31:43.894752 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-07-06 19:31:44.775048 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-06 19:31:44.775104 | orchestrator | changed: [testbed-manager] 2025-07-06 19:31:44.775110 | orchestrator | 2025-07-06 19:31:44.775115 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-07-06 19:31:44.806110 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:31:44.806339 | orchestrator | 2025-07-06 19:31:44.806358 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-07-06 19:31:44.833477 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:31:44.833535 | orchestrator | 2025-07-06 19:31:44.833542 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-07-06 19:31:44.875097 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:31:44.875148 | orchestrator | 2025-07-06 19:31:44.875156 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-07-06 19:31:44.932729 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:31:44.932792 | orchestrator | 2025-07-06 19:31:44.932804 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-07-06 19:31:45.673763 | orchestrator | ok: [testbed-manager] 2025-07-06 19:31:45.673812 | orchestrator | 2025-07-06 19:31:45.673819 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-06 19:31:45.673824 | orchestrator | 2025-07-06 19:31:45.673828 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-06 19:31:47.076353 | orchestrator | ok: [testbed-manager] 2025-07-06 19:31:47.076430 | orchestrator | 2025-07-06 19:31:47.076442 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-07-06 19:31:47.995813 | orchestrator | changed: [testbed-manager] 2025-07-06 19:31:47.995897 | orchestrator | 2025-07-06 19:31:47.995913 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:31:47.995925 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-07-06 19:31:47.995937 | orchestrator | 2025-07-06 19:31:48.485701 | orchestrator | ok: Runtime: 0:07:32.154770 2025-07-06 19:31:48.494889 | 
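The osism.commons.operator play above creates the operator account, adds it to the adm and sudo groups, installs a sudoers drop-in and locks its password. A hedged shell equivalent of those tasks; the user name dragon is taken from the home directory paths later in this log, and the sudoers content is assumed:

    groupadd dragon
    useradd -m -g dragon -G adm,sudo -s /bin/bash dragon
    echo 'dragon ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/dragon   # content assumed
    install -d -m 0700 -o dragon -g dragon /home/dragon/.ssh
    passwd -l dragon   # "Unset & lock password"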
2025-07-06 19:31:48.494999 | TASK [Point out that the log in on the manager is now possible] 2025-07-06 19:31:48.526949 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-07-06 19:31:48.534157 | 2025-07-06 19:31:48.534263 | TASK [Point out that the following task takes some time and does not give any output] 2025-07-06 19:31:48.578581 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-07-06 19:31:48.593401 | 2025-07-06 19:31:48.593779 | TASK [Run manager part 1 + 2] 2025-07-06 19:31:49.472002 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-06 19:31:49.527380 | orchestrator | 2025-07-06 19:31:49.527431 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-07-06 19:31:49.527438 | orchestrator | 2025-07-06 19:31:49.527451 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-06 19:31:52.515491 | orchestrator | ok: [testbed-manager] 2025-07-06 19:31:52.515543 | orchestrator | 2025-07-06 19:31:52.515563 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-07-06 19:31:52.559662 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:31:52.559713 | orchestrator | 2025-07-06 19:31:52.559721 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-07-06 19:31:52.599712 | orchestrator | ok: [testbed-manager] 2025-07-06 19:31:52.599769 | orchestrator | 2025-07-06 19:31:52.599779 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-07-06 19:31:52.640094 | orchestrator | ok: [testbed-manager] 2025-07-06 19:31:52.640149 | orchestrator | 2025-07-06 19:31:52.640159 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-07-06 19:31:52.707353 | orchestrator | ok: [testbed-manager] 2025-07-06 19:31:52.707411 | orchestrator | 2025-07-06 19:31:52.707423 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-06 19:31:52.767541 | orchestrator | ok: [testbed-manager] 2025-07-06 19:31:52.767598 | orchestrator | 2025-07-06 19:31:52.767609 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-06 19:31:52.812924 | orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-07-06 19:31:52.812970 | orchestrator | 2025-07-06 19:31:52.812976 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-07-06 19:31:53.597671 | orchestrator | ok: [testbed-manager] 2025-07-06 19:31:53.597728 | orchestrator | 2025-07-06 19:31:53.597737 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-06 19:31:53.647088 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:31:53.647205 | orchestrator | 2025-07-06 19:31:53.647217 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-06 19:31:55.066409 | orchestrator | changed: [testbed-manager] 2025-07-06 19:31:55.066474 | orchestrator | 2025-07-06 19:31:55.066489 | orchestrator | TASK 
[osism.commons.repository : Remove sources.list file] ********************* 2025-07-06 19:31:55.663451 | orchestrator | ok: [testbed-manager] 2025-07-06 19:31:55.663510 | orchestrator | 2025-07-06 19:31:55.663516 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-06 19:31:56.803081 | orchestrator | changed: [testbed-manager] 2025-07-06 19:31:56.803173 | orchestrator | 2025-07-06 19:31:56.803193 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-07-06 19:32:09.463700 | orchestrator | changed: [testbed-manager] 2025-07-06 19:32:09.463798 | orchestrator | 2025-07-06 19:32:09.463815 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-06 19:32:10.136822 | orchestrator | ok: [testbed-manager] 2025-07-06 19:32:10.137009 | orchestrator | 2025-07-06 19:32:10.137027 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-07-06 19:32:10.191836 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:32:10.191894 | orchestrator | 2025-07-06 19:32:10.191901 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-07-06 19:32:11.162800 | orchestrator | changed: [testbed-manager] 2025-07-06 19:32:11.162889 | orchestrator | 2025-07-06 19:32:11.162906 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-07-06 19:32:12.089089 | orchestrator | changed: [testbed-manager] 2025-07-06 19:32:12.089161 | orchestrator | 2025-07-06 19:32:12.089173 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-07-06 19:32:12.674621 | orchestrator | changed: [testbed-manager] 2025-07-06 19:32:12.674699 | orchestrator | 2025-07-06 19:32:12.674720 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-07-06 19:32:12.720112 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-06 19:32:12.720260 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-06 19:32:12.720280 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-06 19:32:12.720293 | orchestrator | deprecation_warnings=False in ansible.cfg. 
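The osism.commons.repository tasks above switch the manager to the deb822-style layout used by Ubuntu 24.04: the legacy /etc/apt/sources.list is dropped, an ubuntu.sources file is placed in /etc/apt/sources.list.d, and the package cache is refreshed. A shell sketch of the same steps (the ubuntu.sources content comes from the role and is not visible in this log):

    rm -f /etc/apt/sources.list
    install -m 0644 ubuntu.sources /etc/apt/sources.list.d/ubuntu.sources   # source file assumed
    apt-get update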
2025-07-06 19:32:14.910045 | orchestrator | changed: [testbed-manager] 2025-07-06 19:32:14.910092 | orchestrator | 2025-07-06 19:32:14.910100 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-07-06 19:32:23.826438 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-07-06 19:32:23.826536 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-07-06 19:32:23.826553 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-07-06 19:32:23.826566 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-07-06 19:32:23.826587 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-07-06 19:32:23.826598 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-07-06 19:32:23.826610 | orchestrator | 2025-07-06 19:32:23.826623 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-07-06 19:32:24.861264 | orchestrator | changed: [testbed-manager] 2025-07-06 19:32:24.861373 | orchestrator | 2025-07-06 19:32:24.861437 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-07-06 19:32:24.905543 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:32:24.905611 | orchestrator | 2025-07-06 19:32:24.905620 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-07-06 19:32:28.028516 | orchestrator | changed: [testbed-manager] 2025-07-06 19:32:28.028606 | orchestrator | 2025-07-06 19:32:28.028624 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-07-06 19:32:28.068557 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:32:28.068637 | orchestrator | 2025-07-06 19:32:28.068654 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-07-06 19:34:06.011819 | orchestrator | changed: [testbed-manager] 2025-07-06 19:34:06.011862 | orchestrator | 2025-07-06 19:34:06.011871 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-07-06 19:34:07.147246 | orchestrator | ok: [testbed-manager] 2025-07-06 19:34:07.147287 | orchestrator | 2025-07-06 19:34:07.147294 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:34:07.147302 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-07-06 19:34:07.147307 | orchestrator | 2025-07-06 19:34:07.719212 | orchestrator | ok: Runtime: 0:02:18.343924 2025-07-06 19:34:07.727943 | 2025-07-06 19:34:07.728060 | TASK [Reboot manager] 2025-07-06 19:34:09.265023 | orchestrator | ok: Runtime: 0:00:00.943061 2025-07-06 19:34:09.282026 | 2025-07-06 19:34:09.282212 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-07-06 19:34:23.439720 | orchestrator | ok 2025-07-06 19:34:23.449492 | 2025-07-06 19:34:23.449640 | TASK [Wait a little longer for the manager so that everything is ready] 2025-07-06 19:35:23.507690 | orchestrator | ok 2025-07-06 19:35:23.520010 | 2025-07-06 19:35:23.520142 | TASK [Deploy manager + bootstrap nodes] 2025-07-06 19:35:26.030250 | orchestrator | 2025-07-06 19:35:26.030434 | orchestrator | # DEPLOY MANAGER 2025-07-06 19:35:26.030457 | orchestrator | 2025-07-06 19:35:26.030473 | orchestrator | + set -e 2025-07-06 19:35:26.030487 | orchestrator | + echo 2025-07-06 19:35:26.030503 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-07-06 19:35:26.030520 | orchestrator | + echo 2025-07-06 19:35:26.030581 | orchestrator | + cat /opt/manager-vars.sh 2025-07-06 19:35:26.034548 | orchestrator | export NUMBER_OF_NODES=6 2025-07-06 19:35:26.034603 | orchestrator | 2025-07-06 19:35:26.034617 | orchestrator | export CEPH_VERSION=reef 2025-07-06 19:35:26.034630 | orchestrator | export CONFIGURATION_VERSION=main 2025-07-06 19:35:26.034643 | orchestrator | export MANAGER_VERSION=9.1.0 2025-07-06 19:35:26.034669 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-07-06 19:35:26.034680 | orchestrator | 2025-07-06 19:35:26.034699 | orchestrator | export ARA=false 2025-07-06 19:35:26.034710 | orchestrator | export DEPLOY_MODE=manager 2025-07-06 19:35:26.034728 | orchestrator | export TEMPEST=false 2025-07-06 19:35:26.034739 | orchestrator | export IS_ZUUL=true 2025-07-06 19:35:26.034780 | orchestrator | 2025-07-06 19:35:26.034808 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.103 2025-07-06 19:35:26.034827 | orchestrator | export EXTERNAL_API=false 2025-07-06 19:35:26.034844 | orchestrator | 2025-07-06 19:35:26.034855 | orchestrator | export IMAGE_USER=ubuntu 2025-07-06 19:35:26.034869 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-07-06 19:35:26.034880 | orchestrator | 2025-07-06 19:35:26.034891 | orchestrator | export CEPH_STACK=ceph-ansible 2025-07-06 19:35:26.034966 | orchestrator | 2025-07-06 19:35:26.034980 | orchestrator | + echo 2025-07-06 19:35:26.034993 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-06 19:35:26.036318 | orchestrator | ++ export INTERACTIVE=false 2025-07-06 19:35:26.036339 | orchestrator | ++ INTERACTIVE=false 2025-07-06 19:35:26.036352 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-06 19:35:26.036364 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-06 19:35:26.036595 | orchestrator | + source /opt/manager-vars.sh 2025-07-06 19:35:26.036610 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-06 19:35:26.036622 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-06 19:35:26.036633 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-06 19:35:26.036789 | orchestrator | ++ CEPH_VERSION=reef 2025-07-06 19:35:26.036808 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-06 19:35:26.036820 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-06 19:35:26.036872 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-07-06 19:35:26.036885 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-07-06 19:35:26.036950 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-06 19:35:26.036973 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-06 19:35:26.036988 | orchestrator | ++ export ARA=false 2025-07-06 19:35:26.037000 | orchestrator | ++ ARA=false 2025-07-06 19:35:26.037011 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-06 19:35:26.037022 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-06 19:35:26.037036 | orchestrator | ++ export TEMPEST=false 2025-07-06 19:35:26.037047 | orchestrator | ++ TEMPEST=false 2025-07-06 19:35:26.037111 | orchestrator | ++ export IS_ZUUL=true 2025-07-06 19:35:26.037124 | orchestrator | ++ IS_ZUUL=true 2025-07-06 19:35:26.037139 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.103 2025-07-06 19:35:26.037150 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.103 2025-07-06 19:35:26.037161 | orchestrator | ++ export EXTERNAL_API=false 2025-07-06 19:35:26.037176 | orchestrator | ++ EXTERNAL_API=false 2025-07-06 19:35:26.037186 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-06 
19:35:26.037198 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-06 19:35:26.037226 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-06 19:35:26.037238 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-06 19:35:26.037298 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-06 19:35:26.037321 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-06 19:35:26.037435 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-07-06 19:35:26.107060 | orchestrator | + docker version 2025-07-06 19:35:26.382945 | orchestrator | Client: Docker Engine - Community 2025-07-06 19:35:26.383058 | orchestrator | Version: 27.5.1 2025-07-06 19:35:26.383074 | orchestrator | API version: 1.47 2025-07-06 19:35:26.383085 | orchestrator | Go version: go1.22.11 2025-07-06 19:35:26.383095 | orchestrator | Git commit: 9f9e405 2025-07-06 19:35:26.383105 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-07-06 19:35:26.383117 | orchestrator | OS/Arch: linux/amd64 2025-07-06 19:35:26.383126 | orchestrator | Context: default 2025-07-06 19:35:26.383136 | orchestrator | 2025-07-06 19:35:26.383146 | orchestrator | Server: Docker Engine - Community 2025-07-06 19:35:26.383156 | orchestrator | Engine: 2025-07-06 19:35:26.383166 | orchestrator | Version: 27.5.1 2025-07-06 19:35:26.383176 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-07-06 19:35:26.383218 | orchestrator | Go version: go1.22.11 2025-07-06 19:35:26.383228 | orchestrator | Git commit: 4c9b3b0 2025-07-06 19:35:26.383238 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-07-06 19:35:26.383247 | orchestrator | OS/Arch: linux/amd64 2025-07-06 19:35:26.383257 | orchestrator | Experimental: false 2025-07-06 19:35:26.383266 | orchestrator | containerd: 2025-07-06 19:35:26.383276 | orchestrator | Version: 1.7.27 2025-07-06 19:35:26.383285 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-07-06 19:35:26.383296 | orchestrator | runc: 2025-07-06 19:35:26.383305 | orchestrator | Version: 1.2.5 2025-07-06 19:35:26.383315 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-07-06 19:35:26.383324 | orchestrator | docker-init: 2025-07-06 19:35:26.383334 | orchestrator | Version: 0.19.0 2025-07-06 19:35:26.383346 | orchestrator | GitCommit: de40ad0 2025-07-06 19:35:26.387818 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-07-06 19:35:26.399533 | orchestrator | + set -e 2025-07-06 19:35:26.399592 | orchestrator | + source /opt/manager-vars.sh 2025-07-06 19:35:26.399603 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-06 19:35:26.399614 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-06 19:35:26.399625 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-06 19:35:26.399636 | orchestrator | ++ CEPH_VERSION=reef 2025-07-06 19:35:26.399647 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-06 19:35:26.399658 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-06 19:35:26.399669 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-07-06 19:35:26.399680 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-07-06 19:35:26.399691 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-06 19:35:26.399701 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-06 19:35:26.399712 | orchestrator | ++ export ARA=false 2025-07-06 19:35:26.399723 | orchestrator | ++ ARA=false 2025-07-06 19:35:26.399734 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-06 19:35:26.399745 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-06 19:35:26.399773 | orchestrator | ++ 
export TEMPEST=false 2025-07-06 19:35:26.399785 | orchestrator | ++ TEMPEST=false 2025-07-06 19:35:26.399795 | orchestrator | ++ export IS_ZUUL=true 2025-07-06 19:35:26.399806 | orchestrator | ++ IS_ZUUL=true 2025-07-06 19:35:26.399817 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.103 2025-07-06 19:35:26.399828 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.103 2025-07-06 19:35:26.399838 | orchestrator | ++ export EXTERNAL_API=false 2025-07-06 19:35:26.399849 | orchestrator | ++ EXTERNAL_API=false 2025-07-06 19:35:26.399860 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-06 19:35:26.399870 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-06 19:35:26.399881 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-06 19:35:26.399892 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-06 19:35:26.399912 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-06 19:35:26.399923 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-06 19:35:26.399934 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-06 19:35:26.399945 | orchestrator | ++ export INTERACTIVE=false 2025-07-06 19:35:26.399955 | orchestrator | ++ INTERACTIVE=false 2025-07-06 19:35:26.399966 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-06 19:35:26.399981 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-06 19:35:26.399993 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-07-06 19:35:26.400003 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.1.0 2025-07-06 19:35:26.407788 | orchestrator | + set -e 2025-07-06 19:35:26.408297 | orchestrator | + VERSION=9.1.0 2025-07-06 19:35:26.408317 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-07-06 19:35:26.418192 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-07-06 19:35:26.418355 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-07-06 19:35:26.422932 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-07-06 19:35:26.428340 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-07-06 19:35:26.436530 | orchestrator | /opt/configuration ~ 2025-07-06 19:35:26.436626 | orchestrator | + set -e 2025-07-06 19:35:26.436642 | orchestrator | + pushd /opt/configuration 2025-07-06 19:35:26.436654 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-07-06 19:35:26.439668 | orchestrator | + source /opt/venv/bin/activate 2025-07-06 19:35:26.440719 | orchestrator | ++ deactivate nondestructive 2025-07-06 19:35:26.440785 | orchestrator | ++ '[' -n '' ']' 2025-07-06 19:35:26.440836 | orchestrator | ++ '[' -n '' ']' 2025-07-06 19:35:26.440885 | orchestrator | ++ hash -r 2025-07-06 19:35:26.441090 | orchestrator | ++ '[' -n '' ']' 2025-07-06 19:35:26.441108 | orchestrator | ++ unset VIRTUAL_ENV 2025-07-06 19:35:26.441119 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-07-06 19:35:26.441145 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-07-06 19:35:26.441158 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-07-06 19:35:26.441169 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-07-06 19:35:26.441180 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-07-06 19:35:26.441191 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-07-06 19:35:26.441202 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-06 19:35:26.441214 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-06 19:35:26.441225 | orchestrator | ++ export PATH 2025-07-06 19:35:26.441236 | orchestrator | ++ '[' -n '' ']' 2025-07-06 19:35:26.441247 | orchestrator | ++ '[' -z '' ']' 2025-07-06 19:35:26.441258 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-07-06 19:35:26.441268 | orchestrator | ++ PS1='(venv) ' 2025-07-06 19:35:26.441279 | orchestrator | ++ export PS1 2025-07-06 19:35:26.441290 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-07-06 19:35:26.441300 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-07-06 19:35:26.441311 | orchestrator | ++ hash -r 2025-07-06 19:35:26.441409 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-07-06 19:35:27.519143 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-07-06 19:35:27.520085 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.4) 2025-07-06 19:35:27.521497 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-07-06 19:35:27.522968 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-07-06 19:35:27.524008 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0) 2025-07-06 19:35:27.534220 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1) 2025-07-06 19:35:27.535585 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-07-06 19:35:27.536734 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-07-06 19:35:27.538181 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-07-06 19:35:27.577466 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2) 2025-07-06 19:35:27.579750 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-07-06 19:35:27.581327 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0) 2025-07-06 19:35:27.582872 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.6.15) 2025-07-06 19:35:27.586715 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-07-06 19:35:27.808394 | orchestrator | ++ which gilt 2025-07-06 19:35:27.812612 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-07-06 19:35:27.812691 | orchestrator | + /opt/venv/bin/gilt overlay 2025-07-06 19:35:28.057277 | orchestrator | osism.cfg-generics: 2025-07-06 19:35:28.222521 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-07-06 19:35:28.222624 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-07-06 19:35:28.222639 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-07-06 19:35:28.222653 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-07-06 19:35:28.994172 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-07-06 19:35:29.005064 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-07-06 19:35:29.314638 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-07-06 19:35:29.373345 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-07-06 19:35:29.373435 | orchestrator | + deactivate 2025-07-06 19:35:29.373449 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-07-06 19:35:29.373461 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-06 19:35:29.373471 | orchestrator | + export PATH 2025-07-06 19:35:29.373481 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-07-06 19:35:29.373491 | orchestrator | + '[' -n '' ']' 2025-07-06 19:35:29.373503 | orchestrator | + hash -r 2025-07-06 19:35:29.373512 | orchestrator | + '[' -n '' ']' 2025-07-06 19:35:29.373522 | orchestrator | + unset VIRTUAL_ENV 2025-07-06 19:35:29.373532 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-07-06 19:35:29.373542 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-07-06 19:35:29.373551 | orchestrator | ~ 2025-07-06 19:35:29.373561 | orchestrator | + unset -f deactivate 2025-07-06 19:35:29.373571 | orchestrator | + popd 2025-07-06 19:35:29.375675 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]] 2025-07-06 19:35:29.375726 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-07-06 19:35:29.376488 | orchestrator | ++ semver 9.1.0 7.0.0 2025-07-06 19:35:29.436941 | orchestrator | + [[ 1 -ge 0 ]] 2025-07-06 19:35:29.437035 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-07-06 19:35:29.437050 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-07-06 19:35:29.537136 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-07-06 19:35:29.537242 | orchestrator | + source /opt/venv/bin/activate 2025-07-06 19:35:29.537257 | orchestrator | ++ deactivate nondestructive 2025-07-06 19:35:29.537268 | orchestrator | ++ '[' -n '' ']' 2025-07-06 19:35:29.537280 | orchestrator | ++ '[' -n '' ']' 2025-07-06 19:35:29.537291 | orchestrator | ++ hash -r 2025-07-06 19:35:29.537312 | orchestrator | ++ '[' -n '' ']' 2025-07-06 19:35:29.537323 | orchestrator | ++ unset VIRTUAL_ENV 2025-07-06 19:35:29.537334 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-07-06 19:35:29.537358 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-07-06 19:35:29.537371 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-07-06 19:35:29.537382 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-07-06 19:35:29.537393 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-07-06 19:35:29.537404 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-07-06 19:35:29.537415 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-06 19:35:29.537427 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-06 19:35:29.537587 | orchestrator | ++ export PATH 2025-07-06 19:35:29.537607 | orchestrator | ++ '[' -n '' ']' 2025-07-06 19:35:29.537618 | orchestrator | ++ '[' -z '' ']' 2025-07-06 19:35:29.537629 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-07-06 19:35:29.537640 | orchestrator | ++ PS1='(venv) ' 2025-07-06 19:35:29.537650 | orchestrator | ++ export PS1 2025-07-06 19:35:29.537661 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-07-06 19:35:29.537672 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-07-06 19:35:29.537683 | orchestrator | ++ hash -r 2025-07-06 19:35:29.537694 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-07-06 19:35:30.613726 | orchestrator | 2025-07-06 19:35:30.613898 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-07-06 19:35:30.613916 | orchestrator | 2025-07-06 19:35:30.613930 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-07-06 19:35:31.186848 | orchestrator | ok: [testbed-manager] 2025-07-06 19:35:31.186984 | orchestrator | 2025-07-06 19:35:31.187005 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-07-06 19:35:32.185563 | orchestrator | changed: [testbed-manager] 2025-07-06 19:35:32.185667 | orchestrator | 2025-07-06 19:35:32.185684 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-07-06 19:35:32.185697 | orchestrator | 2025-07-06 19:35:32.185709 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-06 19:35:34.549933 | orchestrator | ok: [testbed-manager] 2025-07-06 19:35:34.550103 | orchestrator | 2025-07-06 19:35:34.550123 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-07-06 19:35:34.607599 | orchestrator | ok: [testbed-manager] 2025-07-06 19:35:34.607709 | orchestrator | 2025-07-06 19:35:34.607726 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-07-06 19:35:35.047618 | orchestrator | changed: [testbed-manager] 2025-07-06 19:35:35.047720 | orchestrator | 2025-07-06 19:35:35.047739 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-07-06 19:35:35.081070 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:35:35.081149 | orchestrator | 2025-07-06 19:35:35.081163 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-07-06 19:35:35.423857 | orchestrator | changed: [testbed-manager] 2025-07-06 19:35:35.423946 | orchestrator | 2025-07-06 19:35:35.423958 | orchestrator | TASK [Use insecure glance configuration] 
*************************************** 2025-07-06 19:35:35.481265 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:35:35.481361 | orchestrator | 2025-07-06 19:35:35.481376 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-07-06 19:35:35.819828 | orchestrator | ok: [testbed-manager] 2025-07-06 19:35:35.819941 | orchestrator | 2025-07-06 19:35:35.819957 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-07-06 19:35:35.937591 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:35:35.937682 | orchestrator | 2025-07-06 19:35:35.937697 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-07-06 19:35:35.937710 | orchestrator | 2025-07-06 19:35:35.937722 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-06 19:35:37.770319 | orchestrator | ok: [testbed-manager] 2025-07-06 19:35:37.770424 | orchestrator | 2025-07-06 19:35:37.770441 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-07-06 19:35:37.871200 | orchestrator | included: osism.services.traefik for testbed-manager 2025-07-06 19:35:37.871301 | orchestrator | 2025-07-06 19:35:37.871316 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-07-06 19:35:37.926293 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-07-06 19:35:37.926393 | orchestrator | 2025-07-06 19:35:37.926408 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-07-06 19:35:39.073475 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-07-06 19:35:39.073580 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-07-06 19:35:39.073599 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-07-06 19:35:39.073611 | orchestrator | 2025-07-06 19:35:39.073623 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-07-06 19:35:40.850559 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-07-06 19:35:40.850693 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-07-06 19:35:40.850720 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-07-06 19:35:40.850741 | orchestrator | 2025-07-06 19:35:40.850762 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-07-06 19:35:41.490643 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-06 19:35:41.490752 | orchestrator | changed: [testbed-manager] 2025-07-06 19:35:41.490825 | orchestrator | 2025-07-06 19:35:41.490845 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-07-06 19:35:42.135915 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-06 19:35:42.136024 | orchestrator | changed: [testbed-manager] 2025-07-06 19:35:42.136040 | orchestrator | 2025-07-06 19:35:42.136053 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-07-06 19:35:42.191397 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:35:42.191494 | orchestrator | 2025-07-06 19:35:42.191508 | orchestrator | TASK [osism.services.traefik : Remove 
dynamic configuration] ******************* 2025-07-06 19:35:42.548731 | orchestrator | ok: [testbed-manager] 2025-07-06 19:35:42.548881 | orchestrator | 2025-07-06 19:35:42.548910 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-07-06 19:35:42.639350 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-07-06 19:35:42.639454 | orchestrator | 2025-07-06 19:35:42.639471 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-07-06 19:35:43.751096 | orchestrator | changed: [testbed-manager] 2025-07-06 19:35:43.751180 | orchestrator | 2025-07-06 19:35:43.751192 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-07-06 19:35:44.552380 | orchestrator | changed: [testbed-manager] 2025-07-06 19:35:44.552480 | orchestrator | 2025-07-06 19:35:44.552496 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-07-06 19:35:56.218100 | orchestrator | changed: [testbed-manager] 2025-07-06 19:35:56.218213 | orchestrator | 2025-07-06 19:35:56.218248 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-07-06 19:35:56.277141 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:35:56.277269 | orchestrator | 2025-07-06 19:35:56.277294 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-07-06 19:35:56.277315 | orchestrator | 2025-07-06 19:35:56.277334 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-06 19:35:58.144236 | orchestrator | ok: [testbed-manager] 2025-07-06 19:35:58.144356 | orchestrator | 2025-07-06 19:35:58.144372 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-07-06 19:35:58.249672 | orchestrator | included: osism.services.manager for testbed-manager 2025-07-06 19:35:58.249777 | orchestrator | 2025-07-06 19:35:58.249794 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-07-06 19:35:58.305626 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-07-06 19:35:58.305744 | orchestrator | 2025-07-06 19:35:58.305762 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-07-06 19:36:00.824284 | orchestrator | ok: [testbed-manager] 2025-07-06 19:36:00.824409 | orchestrator | 2025-07-06 19:36:00.824432 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-07-06 19:36:00.881658 | orchestrator | ok: [testbed-manager] 2025-07-06 19:36:00.881762 | orchestrator | 2025-07-06 19:36:00.881777 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-07-06 19:36:01.014805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-07-06 19:36:01.014962 | orchestrator | 2025-07-06 19:36:01.014989 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-07-06 19:36:03.816411 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-07-06 19:36:03.816526 | orchestrator | 
changed: [testbed-manager] => (item=/opt/archive) 2025-07-06 19:36:03.816542 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-07-06 19:36:03.816555 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-07-06 19:36:03.816567 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-07-06 19:36:03.816579 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-07-06 19:36:03.816590 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-07-06 19:36:03.816602 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-07-06 19:36:03.816613 | orchestrator | 2025-07-06 19:36:03.816629 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-07-06 19:36:04.484112 | orchestrator | changed: [testbed-manager] 2025-07-06 19:36:04.484218 | orchestrator | 2025-07-06 19:36:04.484236 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-07-06 19:36:05.119549 | orchestrator | changed: [testbed-manager] 2025-07-06 19:36:05.119657 | orchestrator | 2025-07-06 19:36:05.119673 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-07-06 19:36:05.188618 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-07-06 19:36:05.188724 | orchestrator | 2025-07-06 19:36:05.188738 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-07-06 19:36:06.418585 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-07-06 19:36:06.418709 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-07-06 19:36:06.418724 | orchestrator | 2025-07-06 19:36:06.418738 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-07-06 19:36:07.062753 | orchestrator | changed: [testbed-manager] 2025-07-06 19:36:07.062910 | orchestrator | 2025-07-06 19:36:07.062928 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-07-06 19:36:07.130236 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:36:07.130336 | orchestrator | 2025-07-06 19:36:07.130350 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-07-06 19:36:07.190982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-07-06 19:36:07.191084 | orchestrator | 2025-07-06 19:36:07.191099 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-07-06 19:36:08.565610 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-06 19:36:08.565717 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-06 19:36:08.565732 | orchestrator | changed: [testbed-manager] 2025-07-06 19:36:08.565745 | orchestrator | 2025-07-06 19:36:08.565757 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-07-06 19:36:09.223477 | orchestrator | changed: [testbed-manager] 2025-07-06 19:36:09.223590 | orchestrator | 2025-07-06 19:36:09.223607 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-07-06 19:36:09.285180 | orchestrator | skipping: [testbed-manager] 2025-07-06 
19:36:09.285278 | orchestrator | 2025-07-06 19:36:09.285294 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-07-06 19:36:09.396598 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-07-06 19:36:09.396700 | orchestrator | 2025-07-06 19:36:09.396715 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-07-06 19:36:09.914000 | orchestrator | changed: [testbed-manager] 2025-07-06 19:36:09.914159 | orchestrator | 2025-07-06 19:36:09.914178 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-07-06 19:36:12.334769 | orchestrator | changed: [testbed-manager] 2025-07-06 19:36:12.334982 | orchestrator | 2025-07-06 19:36:12.335009 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-07-06 19:36:13.612539 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-07-06 19:36:13.612643 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-07-06 19:36:13.612659 | orchestrator | 2025-07-06 19:36:13.612672 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-07-06 19:36:14.262329 | orchestrator | changed: [testbed-manager] 2025-07-06 19:36:14.262465 | orchestrator | 2025-07-06 19:36:14.262483 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-07-06 19:36:14.671544 | orchestrator | ok: [testbed-manager] 2025-07-06 19:36:14.671653 | orchestrator | 2025-07-06 19:36:14.671670 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-07-06 19:36:15.018638 | orchestrator | changed: [testbed-manager] 2025-07-06 19:36:15.018743 | orchestrator | 2025-07-06 19:36:15.018759 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-07-06 19:36:15.062508 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:36:15.062605 | orchestrator | 2025-07-06 19:36:15.062619 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-07-06 19:36:15.129416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-07-06 19:36:15.129525 | orchestrator | 2025-07-06 19:36:15.129542 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-07-06 19:36:15.173567 | orchestrator | ok: [testbed-manager] 2025-07-06 19:36:15.173657 | orchestrator | 2025-07-06 19:36:15.173671 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-07-06 19:36:17.241717 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-07-06 19:36:17.241899 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-07-06 19:36:17.241917 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-07-06 19:36:17.241929 | orchestrator | 2025-07-06 19:36:17.241941 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-07-06 19:36:17.985435 | orchestrator | changed: [testbed-manager] 2025-07-06 19:36:17.985484 | orchestrator | 2025-07-06 19:36:17.985497 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] 
********************* 2025-07-06 19:36:18.720470 | orchestrator | changed: [testbed-manager] 2025-07-06 19:36:18.720575 | orchestrator | 2025-07-06 19:36:18.720590 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-07-06 19:36:19.465259 | orchestrator | changed: [testbed-manager] 2025-07-06 19:36:19.465359 | orchestrator | 2025-07-06 19:36:19.465374 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-07-06 19:36:19.534310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-07-06 19:36:19.534409 | orchestrator | 2025-07-06 19:36:19.534423 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-07-06 19:36:19.593170 | orchestrator | ok: [testbed-manager] 2025-07-06 19:36:19.593261 | orchestrator | 2025-07-06 19:36:19.593275 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-07-06 19:36:20.274238 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-07-06 19:36:20.274346 | orchestrator | 2025-07-06 19:36:20.274362 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-07-06 19:36:20.361653 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-07-06 19:36:20.361751 | orchestrator | 2025-07-06 19:36:20.361766 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-07-06 19:36:21.097193 | orchestrator | changed: [testbed-manager] 2025-07-06 19:36:21.097291 | orchestrator | 2025-07-06 19:36:21.097306 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-07-06 19:36:21.707000 | orchestrator | ok: [testbed-manager] 2025-07-06 19:36:21.707101 | orchestrator | 2025-07-06 19:36:21.707118 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-07-06 19:36:21.756511 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:36:21.756603 | orchestrator | 2025-07-06 19:36:21.756618 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-07-06 19:36:21.811682 | orchestrator | ok: [testbed-manager] 2025-07-06 19:36:21.811772 | orchestrator | 2025-07-06 19:36:21.811783 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-07-06 19:36:22.591129 | orchestrator | changed: [testbed-manager] 2025-07-06 19:36:22.591236 | orchestrator | 2025-07-06 19:36:22.591251 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-07-06 19:37:28.600037 | orchestrator | changed: [testbed-manager] 2025-07-06 19:37:28.600158 | orchestrator | 2025-07-06 19:37:28.600174 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-07-06 19:37:29.594531 | orchestrator | ok: [testbed-manager] 2025-07-06 19:37:29.594638 | orchestrator | 2025-07-06 19:37:29.594654 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-07-06 19:37:29.655571 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:37:29.655669 | orchestrator | 2025-07-06 19:37:29.655684 | orchestrator | TASK [osism.services.manager : 
Manage manager service] ************************* 2025-07-06 19:37:32.489631 | orchestrator | changed: [testbed-manager] 2025-07-06 19:37:32.489733 | orchestrator | 2025-07-06 19:37:32.489748 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-07-06 19:37:32.540215 | orchestrator | ok: [testbed-manager] 2025-07-06 19:37:32.540309 | orchestrator | 2025-07-06 19:37:32.540323 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-07-06 19:37:32.540337 | orchestrator | 2025-07-06 19:37:32.540349 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-07-06 19:37:32.593170 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:37:32.593261 | orchestrator | 2025-07-06 19:37:32.593299 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-07-06 19:38:32.645506 | orchestrator | Pausing for 60 seconds 2025-07-06 19:38:32.645622 | orchestrator | changed: [testbed-manager] 2025-07-06 19:38:32.645640 | orchestrator | 2025-07-06 19:38:32.645653 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-07-06 19:38:37.228627 | orchestrator | changed: [testbed-manager] 2025-07-06 19:38:37.228736 | orchestrator | 2025-07-06 19:38:37.228753 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-07-06 19:39:18.880205 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-07-06 19:39:18.880323 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-07-06 19:39:18.880340 | orchestrator | changed: [testbed-manager] 2025-07-06 19:39:18.880353 | orchestrator | 2025-07-06 19:39:18.880366 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-07-06 19:39:27.475795 | orchestrator | changed: [testbed-manager] 2025-07-06 19:39:27.475917 | orchestrator | 2025-07-06 19:39:27.475955 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-07-06 19:39:27.564498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-07-06 19:39:27.564599 | orchestrator | 2025-07-06 19:39:27.564614 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-07-06 19:39:27.564627 | orchestrator | 2025-07-06 19:39:27.564639 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-07-06 19:39:27.622621 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:39:27.622760 | orchestrator | 2025-07-06 19:39:27.622787 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:39:27.622810 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-07-06 19:39:27.622831 | orchestrator | 2025-07-06 19:39:27.717551 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-07-06 19:39:27.717651 | orchestrator | + deactivate 2025-07-06 19:39:27.717667 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-07-06 19:39:27.717681 | orchestrator | + 
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-06 19:39:27.717693 | orchestrator | + export PATH 2025-07-06 19:39:27.717708 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-07-06 19:39:27.717721 | orchestrator | + '[' -n '' ']' 2025-07-06 19:39:27.717732 | orchestrator | + hash -r 2025-07-06 19:39:27.717744 | orchestrator | + '[' -n '' ']' 2025-07-06 19:39:27.717754 | orchestrator | + unset VIRTUAL_ENV 2025-07-06 19:39:27.717765 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-07-06 19:39:27.717776 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-07-06 19:39:27.717787 | orchestrator | + unset -f deactivate 2025-07-06 19:39:27.717798 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-07-06 19:39:27.722499 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-07-06 19:39:27.722534 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-07-06 19:39:27.722547 | orchestrator | + local max_attempts=60 2025-07-06 19:39:27.722558 | orchestrator | + local name=ceph-ansible 2025-07-06 19:39:27.722570 | orchestrator | + local attempt_num=1 2025-07-06 19:39:27.723989 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:39:27.761405 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-06 19:39:27.761493 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-07-06 19:39:27.761507 | orchestrator | + local max_attempts=60 2025-07-06 19:39:27.761519 | orchestrator | + local name=kolla-ansible 2025-07-06 19:39:27.761530 | orchestrator | + local attempt_num=1 2025-07-06 19:39:27.762285 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-07-06 19:39:27.795389 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-06 19:39:27.795468 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-07-06 19:39:27.795478 | orchestrator | + local max_attempts=60 2025-07-06 19:39:27.795486 | orchestrator | + local name=osism-ansible 2025-07-06 19:39:27.795494 | orchestrator | + local attempt_num=1 2025-07-06 19:39:27.795851 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-07-06 19:39:27.826452 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-06 19:39:27.826555 | orchestrator | + [[ true == \t\r\u\e ]] 2025-07-06 19:39:27.826576 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-07-06 19:39:28.481373 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-07-06 19:39:28.678887 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-07-06 19:39:28.679003 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-07-06 19:39:28.679028 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-07-06 19:39:28.679043 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-07-06 19:39:28.679060 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-07-06 19:39:28.679070 | orchestrator | 
manager-beat-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-07-06 19:39:28.679079 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-07-06 19:39:28.679087 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy) 2025-07-06 19:39:28.679096 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-07-06 19:39:28.679104 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-07-06 19:39:28.679113 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-07-06 19:39:28.679121 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-07-06 19:39:28.679189 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-07-06 19:39:28.679198 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-07-06 19:39:28.679207 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-07-06 19:39:28.690539 | orchestrator | ++ semver 9.1.0 7.0.0 2025-07-06 19:39:28.749794 | orchestrator | + [[ 1 -ge 0 ]] 2025-07-06 19:39:28.749896 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-07-06 19:39:28.754100 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-07-06 19:39:30.471770 | orchestrator | Registering Redlock._acquired_script 2025-07-06 19:39:30.471880 | orchestrator | Registering Redlock._extend_script 2025-07-06 19:39:30.471890 | orchestrator | Registering Redlock._release_script 2025-07-06 19:39:30.663874 | orchestrator | 2025-07-06 19:39:30 | INFO  | Task 24c4f875-90ff-443a-a00a-b13f129575ea (resolvconf) was prepared for execution. 2025-07-06 19:39:30.663972 | orchestrator | 2025-07-06 19:39:30 | INFO  | It takes a moment until task 24c4f875-90ff-443a-a00a-b13f129575ea (resolvconf) has been started and output is visible here. 
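Editor's note: the xtrace above shows the deployment script gating on Docker health status before continuing, calling a wait_for_container_healthy helper that reads `docker inspect -f '{{.State.Health.Status}}'` for ceph-ansible, kolla-ansible and osism-ansible. The following is a minimal sketch of such a helper reconstructed from the trace; the retry and sleep behaviour is an assumption, since the log only shows the fast path where each container is already healthy.

    # Sketch of a health gate like the one traced above (not the script's actual code).
    wait_for_container_healthy() {
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1

        until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if (( attempt_num >= max_attempts )); then
                echo "container ${name} did not become healthy in time" >&2
                return 1
            fi
            attempt_num=$((attempt_num + 1))
            sleep 5   # polling interval is an assumption, not taken from the log
        done
    }

    # Usage mirroring the calls in the trace:
    # wait_for_container_healthy 60 ceph-ansible
    # wait_for_container_healthy 60 kolla-ansible
    # wait_for_container_healthy 60 osism-ansible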
2025-07-06 19:39:34.531328 | orchestrator | 2025-07-06 19:39:34.531467 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-07-06 19:39:34.531806 | orchestrator | 2025-07-06 19:39:34.532363 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-06 19:39:34.532773 | orchestrator | Sunday 06 July 2025 19:39:34 +0000 (0:00:00.144) 0:00:00.144 *********** 2025-07-06 19:39:38.238369 | orchestrator | ok: [testbed-manager] 2025-07-06 19:39:38.238530 | orchestrator | 2025-07-06 19:39:38.238974 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-07-06 19:39:38.239359 | orchestrator | Sunday 06 July 2025 19:39:38 +0000 (0:00:03.710) 0:00:03.855 *********** 2025-07-06 19:39:38.297756 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:39:38.297921 | orchestrator | 2025-07-06 19:39:38.298791 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-07-06 19:39:38.299473 | orchestrator | Sunday 06 July 2025 19:39:38 +0000 (0:00:00.058) 0:00:03.914 *********** 2025-07-06 19:39:38.378087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-07-06 19:39:38.378202 | orchestrator | 2025-07-06 19:39:38.379358 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-07-06 19:39:38.380806 | orchestrator | Sunday 06 July 2025 19:39:38 +0000 (0:00:00.077) 0:00:03.991 *********** 2025-07-06 19:39:38.452711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-07-06 19:39:38.453127 | orchestrator | 2025-07-06 19:39:38.454534 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-07-06 19:39:38.455885 | orchestrator | Sunday 06 July 2025 19:39:38 +0000 (0:00:00.076) 0:00:04.067 *********** 2025-07-06 19:39:39.490786 | orchestrator | ok: [testbed-manager] 2025-07-06 19:39:39.490907 | orchestrator | 2025-07-06 19:39:39.491240 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-07-06 19:39:39.491874 | orchestrator | Sunday 06 July 2025 19:39:39 +0000 (0:00:01.038) 0:00:05.105 *********** 2025-07-06 19:39:39.541287 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:39:39.541724 | orchestrator | 2025-07-06 19:39:39.541752 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-07-06 19:39:39.542689 | orchestrator | Sunday 06 July 2025 19:39:39 +0000 (0:00:00.052) 0:00:05.158 *********** 2025-07-06 19:39:40.012562 | orchestrator | ok: [testbed-manager] 2025-07-06 19:39:40.012667 | orchestrator | 2025-07-06 19:39:40.013602 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-07-06 19:39:40.014446 | orchestrator | Sunday 06 July 2025 19:39:40 +0000 (0:00:00.469) 0:00:05.627 *********** 2025-07-06 19:39:40.089755 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:39:40.089852 | orchestrator | 2025-07-06 19:39:40.090186 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-07-06 19:39:40.090960 | orchestrator | Sunday 06 July 2025 19:39:40 +0000 (0:00:00.079) 0:00:05.706 
*********** 2025-07-06 19:39:40.593668 | orchestrator | changed: [testbed-manager] 2025-07-06 19:39:40.593770 | orchestrator | 2025-07-06 19:39:40.594660 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-07-06 19:39:40.595721 | orchestrator | Sunday 06 July 2025 19:39:40 +0000 (0:00:00.501) 0:00:06.208 *********** 2025-07-06 19:39:41.644452 | orchestrator | changed: [testbed-manager] 2025-07-06 19:39:41.644658 | orchestrator | 2025-07-06 19:39:41.644814 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-07-06 19:39:41.645571 | orchestrator | Sunday 06 July 2025 19:39:41 +0000 (0:00:01.047) 0:00:07.255 *********** 2025-07-06 19:39:42.597096 | orchestrator | ok: [testbed-manager] 2025-07-06 19:39:42.597246 | orchestrator | 2025-07-06 19:39:42.597937 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-07-06 19:39:42.598824 | orchestrator | Sunday 06 July 2025 19:39:42 +0000 (0:00:00.957) 0:00:08.212 *********** 2025-07-06 19:39:42.691853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-07-06 19:39:42.692306 | orchestrator | 2025-07-06 19:39:42.693575 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-07-06 19:39:42.694969 | orchestrator | Sunday 06 July 2025 19:39:42 +0000 (0:00:00.096) 0:00:08.308 *********** 2025-07-06 19:39:43.871253 | orchestrator | changed: [testbed-manager] 2025-07-06 19:39:43.871844 | orchestrator | 2025-07-06 19:39:43.873014 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:39:43.876698 | orchestrator | 2025-07-06 19:39:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 19:39:43.876815 | orchestrator | 2025-07-06 19:39:43 | INFO  | Please wait and do not abort execution. 
2025-07-06 19:39:43.877934 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-06 19:39:43.878791 | orchestrator | 2025-07-06 19:39:43.879816 | orchestrator | 2025-07-06 19:39:43.880552 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:39:43.881033 | orchestrator | Sunday 06 July 2025 19:39:43 +0000 (0:00:01.175) 0:00:09.484 *********** 2025-07-06 19:39:43.881719 | orchestrator | =============================================================================== 2025-07-06 19:39:43.882330 | orchestrator | Gathering Facts --------------------------------------------------------- 3.71s 2025-07-06 19:39:43.883573 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.18s 2025-07-06 19:39:43.884359 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.05s 2025-07-06 19:39:43.885266 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.04s 2025-07-06 19:39:43.886375 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.96s 2025-07-06 19:39:43.886830 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.50s 2025-07-06 19:39:43.887619 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.47s 2025-07-06 19:39:43.888309 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s 2025-07-06 19:39:43.888923 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-07-06 19:39:43.889656 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-07-06 19:39:43.890590 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-07-06 19:39:43.891178 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-07-06 19:39:43.891887 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2025-07-06 19:39:44.343499 | orchestrator | + osism apply sshconfig 2025-07-06 19:39:46.047194 | orchestrator | Registering Redlock._acquired_script 2025-07-06 19:39:46.047304 | orchestrator | Registering Redlock._extend_script 2025-07-06 19:39:46.047322 | orchestrator | Registering Redlock._release_script 2025-07-06 19:39:46.103953 | orchestrator | 2025-07-06 19:39:46 | INFO  | Task bb7d1469-2488-40dc-8ac6-08ded7af60f4 (sshconfig) was prepared for execution. 2025-07-06 19:39:46.104054 | orchestrator | 2025-07-06 19:39:46 | INFO  | It takes a moment until task bb7d1469-2488-40dc-8ac6-08ded7af60f4 (sshconfig) has been started and output is visible here. 
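Editor's note: the resolvconf play above removes packages that would manage /etc/resolv.conf, links /etc/resolv.conf to systemd-resolved's stub resolver, copies configuration files and restarts systemd-resolved. A rough shell equivalent of the steps visible in the log, for orientation only (the role itself works through Ansible tasks and templates; package names and template contents are not shown in the log):

    # Rough manual equivalent of the osism.commons.resolvconf steps logged above.
    ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf   # "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf"
    systemctl enable --now systemd-resolved                          # "Start/enable systemd-resolved service"
    systemctl restart systemd-resolved                               # handler "Restart systemd-resolved service"
    # The role also removes conflicting resolver packages and copies its own
    # resolved configuration; those file contents are not visible in this log.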
2025-07-06 19:39:50.044854 | orchestrator | 2025-07-06 19:39:50.044954 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-07-06 19:39:50.046639 | orchestrator | 2025-07-06 19:39:50.048431 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-07-06 19:39:50.049570 | orchestrator | Sunday 06 July 2025 19:39:50 +0000 (0:00:00.160) 0:00:00.160 *********** 2025-07-06 19:39:50.637979 | orchestrator | ok: [testbed-manager] 2025-07-06 19:39:50.638884 | orchestrator | 2025-07-06 19:39:50.640837 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-07-06 19:39:50.642009 | orchestrator | Sunday 06 July 2025 19:39:50 +0000 (0:00:00.596) 0:00:00.756 *********** 2025-07-06 19:39:51.126625 | orchestrator | changed: [testbed-manager] 2025-07-06 19:39:51.127362 | orchestrator | 2025-07-06 19:39:51.128055 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-07-06 19:39:51.129607 | orchestrator | Sunday 06 July 2025 19:39:51 +0000 (0:00:00.488) 0:00:01.244 *********** 2025-07-06 19:39:56.711612 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-07-06 19:39:56.712538 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-07-06 19:39:56.712951 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-07-06 19:39:56.714283 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-07-06 19:39:56.714917 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-07-06 19:39:56.715594 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-07-06 19:39:56.716277 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-07-06 19:39:56.716892 | orchestrator | 2025-07-06 19:39:56.717600 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-07-06 19:39:56.718112 | orchestrator | Sunday 06 July 2025 19:39:56 +0000 (0:00:05.584) 0:00:06.829 *********** 2025-07-06 19:39:56.769667 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:39:56.769817 | orchestrator | 2025-07-06 19:39:56.770494 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-07-06 19:39:56.771240 | orchestrator | Sunday 06 July 2025 19:39:56 +0000 (0:00:00.059) 0:00:06.889 *********** 2025-07-06 19:39:57.343589 | orchestrator | changed: [testbed-manager] 2025-07-06 19:39:57.343690 | orchestrator | 2025-07-06 19:39:57.346147 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:39:57.346329 | orchestrator | 2025-07-06 19:39:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 19:39:57.346349 | orchestrator | 2025-07-06 19:39:57 | INFO  | Please wait and do not abort execution. 
2025-07-06 19:39:57.347469 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:39:57.347902 | orchestrator | 2025-07-06 19:39:57.348903 | orchestrator | 2025-07-06 19:39:57.349912 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:39:57.350696 | orchestrator | Sunday 06 July 2025 19:39:57 +0000 (0:00:00.571) 0:00:07.460 *********** 2025-07-06 19:39:57.351654 | orchestrator | =============================================================================== 2025-07-06 19:39:57.352676 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.58s 2025-07-06 19:39:57.353711 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.60s 2025-07-06 19:39:57.354342 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s 2025-07-06 19:39:57.354808 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s 2025-07-06 19:39:57.355444 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-07-06 19:39:57.802910 | orchestrator | + osism apply known-hosts 2025-07-06 19:39:59.403074 | orchestrator | Registering Redlock._acquired_script 2025-07-06 19:39:59.403239 | orchestrator | Registering Redlock._extend_script 2025-07-06 19:39:59.403253 | orchestrator | Registering Redlock._release_script 2025-07-06 19:39:59.461013 | orchestrator | 2025-07-06 19:39:59 | INFO  | Task 1d0742b2-26de-40a3-a737-b49651664e26 (known-hosts) was prepared for execution. 2025-07-06 19:39:59.461094 | orchestrator | 2025-07-06 19:39:59 | INFO  | It takes a moment until task 1d0742b2-26de-40a3-a737-b49651664e26 (known-hosts) has been started and output is visible here. 
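Editor's note: the sshconfig play above creates ~/.ssh/config.d, writes one configuration fragment per testbed host and then assembles the fragments into a single ssh config ("Assemble ssh config"). A hand-written equivalent might look like the sketch below; the fragment contents, the operator user name and the target paths are illustrative assumptions, not taken from the generated files (the 192.168.16.10 address is the one scanned for testbed-node-0 further down in this log).

    # Illustrative only: per-host fragments under ~/.ssh/config.d, then assembled.
    mkdir -p ~/.ssh/config.d

    cat > ~/.ssh/config.d/testbed-node-0 <<'EOF'
    Host testbed-node-0
        HostName 192.168.16.10
        User dragon
    EOF

    # Concatenate all fragments into the effective ssh configuration.
    cat ~/.ssh/config.d/* > ~/.ssh/config
    chmod 600 ~/.ssh/config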
2025-07-06 19:40:03.371020 | orchestrator | 2025-07-06 19:40:03.372230 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-07-06 19:40:03.372608 | orchestrator | 2025-07-06 19:40:03.373977 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-07-06 19:40:03.375801 | orchestrator | Sunday 06 July 2025 19:40:03 +0000 (0:00:00.167) 0:00:00.167 *********** 2025-07-06 19:40:09.234480 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-07-06 19:40:09.234947 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-07-06 19:40:09.235332 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-07-06 19:40:09.236006 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-07-06 19:40:09.237118 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-07-06 19:40:09.238603 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-07-06 19:40:09.239909 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-07-06 19:40:09.240609 | orchestrator | 2025-07-06 19:40:09.240704 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-07-06 19:40:09.240987 | orchestrator | Sunday 06 July 2025 19:40:09 +0000 (0:00:05.863) 0:00:06.031 *********** 2025-07-06 19:40:09.387808 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-07-06 19:40:09.388472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-07-06 19:40:09.389485 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-07-06 19:40:09.390492 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-07-06 19:40:09.390829 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-07-06 19:40:09.391590 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-07-06 19:40:09.392315 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-07-06 19:40:09.392911 | orchestrator | 2025-07-06 19:40:09.393464 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:40:09.394115 | orchestrator | Sunday 06 July 2025 19:40:09 +0000 (0:00:00.155) 0:00:06.186 *********** 2025-07-06 19:40:10.547701 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCi2lE0JNl2XEMAf2JiRicLsA+ts8zBZMM5pEolWEUrder1ty+dYq4hq+jc3xdlvqp9OprKbP5Pi7aYrLSGHhDK7MhSprhdoRNl8K3mzaKT08Z4W7jc8dk71YGzeP9kXmL6/qvTYBs6vPQPZuVM4oqdUZiv3nyEHRrvXUdIG8eYiVhEtPP9XLmQuQAYJ6skhnfVVLGzFW2FPwosM7dOcOxj+7zyceZI+6DCnseXetFVQzUldQHaaSni5sdwZorfyB+SzZC8YmzlOGDjn8G+Uc/39JH6g3Ell5ekr7vwIqh14bJMbxr888KfBu1ReMmEq3Z58sZoi/CAHjFVs+EKKvzeLFf+zxi3+tFrjT3KAvUBvEhn9fCz9oAMBbjM9LyPJnGnzH1NiY8TpLouIu3rI5pgT28dUZ48hsnQJhu4LES0lJT7nejUdeDWB3Z++q3m/hwSNY0n99yOrWSLlwfd9b/jU+iGYjCeXdvyFAf+050euy2LsrFyRlPy5PyCimma4LE=) 2025-07-06 19:40:10.548443 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA1JgsUqMiYm6jwQZfmWbRuddTm/xpFe+cJjF5wbQPBzShBp6o0A2uHuLxO7UXEnJIs7i/0KNaT+1HzPyOWGOLE=) 2025-07-06 19:40:10.548973 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJpKSPjZtLsOFtqiMxAAGUL0Qf2+V41BmUzTHk/vacN/) 2025-07-06 19:40:10.549382 | orchestrator | 2025-07-06 19:40:10.550179 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:40:10.550957 | orchestrator | Sunday 06 July 2025 19:40:10 +0000 (0:00:01.158) 0:00:07.345 *********** 2025-07-06 19:40:11.588081 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLDMMdwyW9+DqUN0s/CUaOmkZ+RWQvBYOUl7cQizOyFK+IFCnG7FrZFVf807Dm2drglMfewV28Jxn73fW6RsE9k=) 2025-07-06 19:40:11.588310 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0CLHJ0KzhiBvzJyzZ95qBVHi1b2WkyVCb4+pKovNI97bioZsQSXKcnekTcUCDZU1WmyrVVveqY483S7wffoLhraFghrtVC37PjjJL7RjwGcQ5a07oJC/29A+zEKZ7m3apfmCpvFGwYOgFthfQW4MdLDjdpMgZ5a3KWYOax9T0dYgYUonfrBN5W+NwidvaDg7tTLmhgCv62+Ycd+WWXddG+aiWarMg0TJr95pAHPies1k95yzOni+gjfUVytG3PUPsv1miLCU+HXhjTyBZaN2+NRY3z7n74+mkUEdIQE8Ym11/Pn6cSqGAHPS36P+gnWweDgA/3mgsKoT4ra7CSUoF49KEuerSAQ09bzWAeOyzw3+cBKhlmtbnfsMBpmCNxjTy6TtzTWUVCXSOpMwdjIlXc6Y1Rsx1neSUQ9Icy4CH3oZAx32bOknvS8zC99TbB0s5N75AElO9fjcrGX+ab71cx4LOv/RY8ZTs0KrZFOqLGnaBPzafPPJlfA46MVqzuyc=) 2025-07-06 19:40:11.588451 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE8R/k5d+bKS4f4S3Hd9mUOqLWEh2sWidRkzVb9PqnAz) 2025-07-06 19:40:11.588681 | orchestrator | 2025-07-06 19:40:11.589290 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:40:11.590164 | orchestrator | Sunday 06 July 2025 19:40:11 +0000 (0:00:01.038) 0:00:08.383 *********** 2025-07-06 19:40:12.627444 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9oAo68AgypH9AQ+OEuM/C7ILQQNz4IZe6UP3Gh2UFSF5+i80brUcCnmiSNIp6AfRMOAtEKTtzf9lGRoLZgBznExwTFj+91HeadfICgsdzhgZjBS0uHHn8sQI+q4MrogrbAI+6nB77c0bdbWWDIGtTm2+XmjOePCNUbM0HrNYU2NxAMKO6AdJa+y1dC9O5hX3rWy2rvoFQziV06gpSRXZu/blj0yrc8sGoU0kOvg+uy+BU8n5t8hgHJiIbkFP3YfKVH98Gd8mvhHNbqNkxYuTAbIbv8v7ZapbT7R9/6JI0Svy7M4+7tBDOWUGEMirt6Vqh/a/mfCWTOxjQ9EPUv9v6dgOPOpY3Q+irlcCHy5p03Z0Tx6/lS4bcpJk9DdJbaby9Oe7EJiOwv7KGaoRg7zMIToaXpKuOHDnx6A2aTbBa9LkUCflIhJCUJALVAz4AvLpwc8lZ+4gUetrKSyiAiFZBgHB/UaLv1bmUnZ1S4biDiNlG+zGr7EUyrC4OS9yERfU=) 2025-07-06 19:40:12.628500 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKXvoeSKZbC4qtYMlsjrSmQ3qAmdYG8fB3YnE1HJ6QA0g5RdtPGJ9gTlYyDa+U1yt5AfG+wAQtNCyEmMVoA8ARQ=) 2025-07-06 
19:40:12.629618 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHRKfEq5S7TD+UblqKOYRVUiAeYZt2HGqGxyt9ySDaYw) 2025-07-06 19:40:12.629691 | orchestrator | 2025-07-06 19:40:12.630690 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:40:12.631519 | orchestrator | Sunday 06 July 2025 19:40:12 +0000 (0:00:01.041) 0:00:09.425 *********** 2025-07-06 19:40:13.663342 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+oSO2+jJVwgV+J4gzCwUTxJH+cN1ZjlWNVNh/ji4lAkbYt6uKk59nWi/I9+bLst6l3X2RHmwyQAWU/lpxuJYWjw5tdFHQMXEfM3TFfzIBabsLybdBRsye7vIjc3iC19Fs9BDKa/LhfzddHy5GS9gLxZKe+GoH6Me+Kjpvekuk0QwljYmSC489XD3RUnddXR9BWGDObcBoR8xo/bkvL4myPqHs6ib4ZZ1CXFR+0Mfbh7d5PiGRGqmMzv2aNQP3rZaaTWRrNQ84SoEXSZ9KLNlDFlPYwboFe7c/Bt6OBwyxzzBI7K98RCyGD9Xln2sYhszvmFai0UxqtKyphtk4qU44nDzdhDRkAEIfPTqQokTyN02IyyHC31P0yBVhpRgfdLx6jW8Vs5t1jos4Ongn4Rf3tHbfZijZhCOV5iBBPflricAhd87nDeUn7RsfVSmqZ28zOg0WVIAdBuCkWjJz2OrapB4mbQ/NuGSlmyWIJAGyz2GBPuGb0MNVdr7UvB1ZBEM=) 2025-07-06 19:40:13.663821 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE/RR6ibV9KOGVxovoPIyer41d6qZx7WeXrEadRcq3Fm4oNkUeoetRcYkYXgZjYxtp7ABbo2vhAjX/AN5P1kMiE=) 2025-07-06 19:40:13.665010 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILn7Z5bxIgfsatU1OhGQt1XdTT2/8FNkwqi+QwTka+qt) 2025-07-06 19:40:13.665939 | orchestrator | 2025-07-06 19:40:13.666956 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:40:13.667861 | orchestrator | Sunday 06 July 2025 19:40:13 +0000 (0:00:01.035) 0:00:10.461 *********** 2025-07-06 19:40:14.681918 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEG9PLhnT3UADm8WC5ILC30omfRVpUakNJXSQMCzD+XSHPBCqJbxcQcHD/DfTwbd46c4UMnYcLv++vrtAGJRtek=) 2025-07-06 19:40:14.683177 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCPF4/uarad40MWSBR4LaDkBnyB57fZ7xuKd6OkkRwTzABez3z+HZdB9YRYcmS9mnVexW6Wu64z3TjgB2+k0pOMu73XUG+j0qOpYuPBLagDIP9L/XEPxRDM2D4zVwdPdirB0IBnvO/Kx2lkrlHzrxL8rGnVfuawHLbtuqePeXXDbSpf9mL9MVe/P3OSgqhdaMnrmWD+4+rC+yKAmdlMuSdDFBzxQZvplgCqgmBk6Hh1DhUYewBClfdYsAYCdEd77euWoWYkmjXOzZNVvIvgTI/p5a20vCIVxUCY43sOOSmrTjFTi+0mPWOT7vXPPB8pzcRMWPC7/U7hTC2mz3jOna5kgIOYTkdZ4ctgOMigMFSlOnuRdVQ6vgUSUV/7sTCKLtZCSPZBkEUOzOC4fa8HUCB18LeKiZ/KdbQzy21gbEC++W3h6q/mxEP0T8nQ0gP4OtRHQtyjwS4dtp7OlYYGy9I/VbaC0Rvsf46qH1f61hmu5ijhySlAi64GxFLggMcxfs=) 2025-07-06 19:40:14.684047 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFLMuehTJPSUlD5vTBiatZ64PnsSeSlpatJNJBG8YdiM) 2025-07-06 19:40:14.685089 | orchestrator | 2025-07-06 19:40:14.685780 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:40:14.686291 | orchestrator | Sunday 06 July 2025 19:40:14 +0000 (0:00:01.017) 0:00:11.478 *********** 2025-07-06 19:40:15.776947 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKh5JI3g3wlW7s7NuqyFNUqvmrBt4mEHSyvGuk9IJePgqAHPmwlZHnqZzSPd0kwU6i9o8R+S+D91YDCv3m7EpVw=) 2025-07-06 19:40:15.777842 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCvBB10MOw1jFzeASOMbchbsn+PrFQuclca/aYqRLxaRHDJp0a9xpib5uQGxWiWS7tjFSIZsjUjjjbY8ZAXSBpPyzK0P5ebY7scKQFPfCBm69KaDYo2sBca4o0jXxzkI8ziAecFQk4o3D1DlMqDJRMNryJ+1ofkU0ZfCYZ0wuBaouYcZbsQaFCLGv6vg239EFu2HKpist7t68p3ORljgp4B9fDFxDqSdQOs78nCWDsw32UoIMJHA49ybZhLLx+oIUYRQqlukG5TzoPbR7hD0CC/0Lu2nTcBhPxF0uFOd/cstMZ/YJxMIbec/rJGU7lzMAURVKp85NxVzbmpWqagKR4Iem7CfFKtktNq9sCid3vJyMe0lU/lmsO9SnQ1Rqr4ZZgXjcyuluk8g8YYjVhVTmzAddl+e1vhFvIbPC9OY2GRJZZxMTp3OSXNREEg+pu9p+VIfQYxIoZaG71b7w2tISz9RoBrRTn1hvQmdq1HMiUVVYtmWttOHHRgAiYjOBrVpI0=) 2025-07-06 19:40:15.778276 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP+7K1wcotynFrt2qLpAznlZ5M8f4ddNgL3ROgPIUF76) 2025-07-06 19:40:15.779114 | orchestrator | 2025-07-06 19:40:15.779772 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:40:15.780357 | orchestrator | Sunday 06 July 2025 19:40:15 +0000 (0:00:01.096) 0:00:12.574 *********** 2025-07-06 19:40:16.809968 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHzmr9BmBxH4K1/pnVj66PcXq0VQCiCfYs5zE4/R5V+v) 2025-07-06 19:40:16.810455 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTHgiozb5s8fhna2j+PJ3YNbK16HRVXjG+OpC54+2rUcHG7pbVC2nDX55349zzmzFJAZOUbo6xKdtBmqO9xU9mZ39CjjL0nMQxIxSr8qUN3WFij5xPDieFrgaHeVZmGp1Q9ZnT2PpOlj5sTeoJUbxNlnVzML3/DMucR1iAUPTwIQ5Nmv0Q29SPDJGyPFetV/77ucU6fx2Ry5H81jMeCyfbMKeU6OrW62pJQjk1pixqWLfOTsEy45NIECIKaZ8ZAiZzPggzsndN99bfKbj3NHh/XsPF/bcqEXtlMX/DcVUwoeN7NLCngfcBbU7SzwhDEsxu88nOeTgslGmyUNJBqjWKwNWBQgUKe2FDPCs7jolNNFH+g5WeV7ErIpjAYwx6aT9pDGOytV6SP+NIDWKw8uW+Qfd2d4IEIXowtyEaQ1f7EHLtllZHhK1koAFZQcesXA4oIx9Lgk5A6bEZpytLUlkPwLDVP1ba1ifKTudEbq1XxArMhCOjDSlz6XBayPsgIT8=) 2025-07-06 19:40:16.810984 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLWp0EFWR4cWAXYbiR0Q6cGqxKt9u6nh/xh1OcsEKsRGuUwmUTc5zNXPrh78eWsVAsHa8gtclyrBTJ0JjwyjJZ4=) 2025-07-06 19:40:16.811654 | orchestrator | 2025-07-06 19:40:16.812246 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-07-06 19:40:16.812675 | orchestrator | Sunday 06 July 2025 19:40:16 +0000 (0:00:01.032) 0:00:13.606 *********** 2025-07-06 19:40:22.000384 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-07-06 19:40:22.001036 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-07-06 19:40:22.001454 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-07-06 19:40:22.002545 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-07-06 19:40:22.003401 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-07-06 19:40:22.004021 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-07-06 19:40:22.004669 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-07-06 19:40:22.005110 | orchestrator | 2025-07-06 19:40:22.005555 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-07-06 19:40:22.005900 | orchestrator | Sunday 06 July 2025 19:40:21 +0000 (0:00:05.190) 0:00:18.797 *********** 2025-07-06 19:40:22.170124 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of 
testbed-manager) 2025-07-06 19:40:22.171231 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-07-06 19:40:22.171939 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-07-06 19:40:22.173576 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-07-06 19:40:22.174761 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-07-06 19:40:22.175367 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-07-06 19:40:22.175934 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-07-06 19:40:22.176633 | orchestrator | 2025-07-06 19:40:22.176841 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:40:22.177332 | orchestrator | Sunday 06 July 2025 19:40:22 +0000 (0:00:00.171) 0:00:18.968 *********** 2025-07-06 19:40:23.227420 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJpKSPjZtLsOFtqiMxAAGUL0Qf2+V41BmUzTHk/vacN/) 2025-07-06 19:40:23.228457 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCi2lE0JNl2XEMAf2JiRicLsA+ts8zBZMM5pEolWEUrder1ty+dYq4hq+jc3xdlvqp9OprKbP5Pi7aYrLSGHhDK7MhSprhdoRNl8K3mzaKT08Z4W7jc8dk71YGzeP9kXmL6/qvTYBs6vPQPZuVM4oqdUZiv3nyEHRrvXUdIG8eYiVhEtPP9XLmQuQAYJ6skhnfVVLGzFW2FPwosM7dOcOxj+7zyceZI+6DCnseXetFVQzUldQHaaSni5sdwZorfyB+SzZC8YmzlOGDjn8G+Uc/39JH6g3Ell5ekr7vwIqh14bJMbxr888KfBu1ReMmEq3Z58sZoi/CAHjFVs+EKKvzeLFf+zxi3+tFrjT3KAvUBvEhn9fCz9oAMBbjM9LyPJnGnzH1NiY8TpLouIu3rI5pgT28dUZ48hsnQJhu4LES0lJT7nejUdeDWB3Z++q3m/hwSNY0n99yOrWSLlwfd9b/jU+iGYjCeXdvyFAf+050euy2LsrFyRlPy5PyCimma4LE=) 2025-07-06 19:40:23.229479 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA1JgsUqMiYm6jwQZfmWbRuddTm/xpFe+cJjF5wbQPBzShBp6o0A2uHuLxO7UXEnJIs7i/0KNaT+1HzPyOWGOLE=) 2025-07-06 19:40:23.230143 | orchestrator | 2025-07-06 19:40:23.231033 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:40:23.231799 | orchestrator | Sunday 06 July 2025 19:40:23 +0000 (0:00:01.055) 0:00:20.023 *********** 2025-07-06 19:40:24.303260 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLDMMdwyW9+DqUN0s/CUaOmkZ+RWQvBYOUl7cQizOyFK+IFCnG7FrZFVf807Dm2drglMfewV28Jxn73fW6RsE9k=) 2025-07-06 19:40:24.304110 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC0CLHJ0KzhiBvzJyzZ95qBVHi1b2WkyVCb4+pKovNI97bioZsQSXKcnekTcUCDZU1WmyrVVveqY483S7wffoLhraFghrtVC37PjjJL7RjwGcQ5a07oJC/29A+zEKZ7m3apfmCpvFGwYOgFthfQW4MdLDjdpMgZ5a3KWYOax9T0dYgYUonfrBN5W+NwidvaDg7tTLmhgCv62+Ycd+WWXddG+aiWarMg0TJr95pAHPies1k95yzOni+gjfUVytG3PUPsv1miLCU+HXhjTyBZaN2+NRY3z7n74+mkUEdIQE8Ym11/Pn6cSqGAHPS36P+gnWweDgA/3mgsKoT4ra7CSUoF49KEuerSAQ09bzWAeOyzw3+cBKhlmtbnfsMBpmCNxjTy6TtzTWUVCXSOpMwdjIlXc6Y1Rsx1neSUQ9Icy4CH3oZAx32bOknvS8zC99TbB0s5N75AElO9fjcrGX+ab71cx4LOv/RY8ZTs0KrZFOqLGnaBPzafPPJlfA46MVqzuyc=) 2025-07-06 19:40:24.304572 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE8R/k5d+bKS4f4S3Hd9mUOqLWEh2sWidRkzVb9PqnAz) 2025-07-06 19:40:24.305608 | orchestrator | 2025-07-06 19:40:24.306140 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:40:24.307125 | orchestrator | Sunday 06 July 2025 19:40:24 +0000 (0:00:01.077) 0:00:21.100 *********** 2025-07-06 19:40:25.340519 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9oAo68AgypH9AQ+OEuM/C7ILQQNz4IZe6UP3Gh2UFSF5+i80brUcCnmiSNIp6AfRMOAtEKTtzf9lGRoLZgBznExwTFj+91HeadfICgsdzhgZjBS0uHHn8sQI+q4MrogrbAI+6nB77c0bdbWWDIGtTm2+XmjOePCNUbM0HrNYU2NxAMKO6AdJa+y1dC9O5hX3rWy2rvoFQziV06gpSRXZu/blj0yrc8sGoU0kOvg+uy+BU8n5t8hgHJiIbkFP3YfKVH98Gd8mvhHNbqNkxYuTAbIbv8v7ZapbT7R9/6JI0Svy7M4+7tBDOWUGEMirt6Vqh/a/mfCWTOxjQ9EPUv9v6dgOPOpY3Q+irlcCHy5p03Z0Tx6/lS4bcpJk9DdJbaby9Oe7EJiOwv7KGaoRg7zMIToaXpKuOHDnx6A2aTbBa9LkUCflIhJCUJALVAz4AvLpwc8lZ+4gUetrKSyiAiFZBgHB/UaLv1bmUnZ1S4biDiNlG+zGr7EUyrC4OS9yERfU=) 2025-07-06 19:40:25.341133 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKXvoeSKZbC4qtYMlsjrSmQ3qAmdYG8fB3YnE1HJ6QA0g5RdtPGJ9gTlYyDa+U1yt5AfG+wAQtNCyEmMVoA8ARQ=) 2025-07-06 19:40:25.341945 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHRKfEq5S7TD+UblqKOYRVUiAeYZt2HGqGxyt9ySDaYw) 2025-07-06 19:40:25.343343 | orchestrator | 2025-07-06 19:40:25.343713 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:40:25.344591 | orchestrator | Sunday 06 July 2025 19:40:25 +0000 (0:00:01.037) 0:00:22.138 *********** 2025-07-06 19:40:26.395612 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILn7Z5bxIgfsatU1OhGQt1XdTT2/8FNkwqi+QwTka+qt) 2025-07-06 19:40:26.396742 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+oSO2+jJVwgV+J4gzCwUTxJH+cN1ZjlWNVNh/ji4lAkbYt6uKk59nWi/I9+bLst6l3X2RHmwyQAWU/lpxuJYWjw5tdFHQMXEfM3TFfzIBabsLybdBRsye7vIjc3iC19Fs9BDKa/LhfzddHy5GS9gLxZKe+GoH6Me+Kjpvekuk0QwljYmSC489XD3RUnddXR9BWGDObcBoR8xo/bkvL4myPqHs6ib4ZZ1CXFR+0Mfbh7d5PiGRGqmMzv2aNQP3rZaaTWRrNQ84SoEXSZ9KLNlDFlPYwboFe7c/Bt6OBwyxzzBI7K98RCyGD9Xln2sYhszvmFai0UxqtKyphtk4qU44nDzdhDRkAEIfPTqQokTyN02IyyHC31P0yBVhpRgfdLx6jW8Vs5t1jos4Ongn4Rf3tHbfZijZhCOV5iBBPflricAhd87nDeUn7RsfVSmqZ28zOg0WVIAdBuCkWjJz2OrapB4mbQ/NuGSlmyWIJAGyz2GBPuGb0MNVdr7UvB1ZBEM=) 2025-07-06 19:40:26.396884 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE/RR6ibV9KOGVxovoPIyer41d6qZx7WeXrEadRcq3Fm4oNkUeoetRcYkYXgZjYxtp7ABbo2vhAjX/AN5P1kMiE=) 2025-07-06 19:40:26.397644 | orchestrator | 2025-07-06 19:40:26.398014 | orchestrator | TASK 
[osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:40:26.398475 | orchestrator | Sunday 06 July 2025 19:40:26 +0000 (0:00:01.054) 0:00:23.192 *********** 2025-07-06 19:40:27.420714 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEG9PLhnT3UADm8WC5ILC30omfRVpUakNJXSQMCzD+XSHPBCqJbxcQcHD/DfTwbd46c4UMnYcLv++vrtAGJRtek=) 2025-07-06 19:40:27.421474 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCPF4/uarad40MWSBR4LaDkBnyB57fZ7xuKd6OkkRwTzABez3z+HZdB9YRYcmS9mnVexW6Wu64z3TjgB2+k0pOMu73XUG+j0qOpYuPBLagDIP9L/XEPxRDM2D4zVwdPdirB0IBnvO/Kx2lkrlHzrxL8rGnVfuawHLbtuqePeXXDbSpf9mL9MVe/P3OSgqhdaMnrmWD+4+rC+yKAmdlMuSdDFBzxQZvplgCqgmBk6Hh1DhUYewBClfdYsAYCdEd77euWoWYkmjXOzZNVvIvgTI/p5a20vCIVxUCY43sOOSmrTjFTi+0mPWOT7vXPPB8pzcRMWPC7/U7hTC2mz3jOna5kgIOYTkdZ4ctgOMigMFSlOnuRdVQ6vgUSUV/7sTCKLtZCSPZBkEUOzOC4fa8HUCB18LeKiZ/KdbQzy21gbEC++W3h6q/mxEP0T8nQ0gP4OtRHQtyjwS4dtp7OlYYGy9I/VbaC0Rvsf46qH1f61hmu5ijhySlAi64GxFLggMcxfs=) 2025-07-06 19:40:27.422165 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFLMuehTJPSUlD5vTBiatZ64PnsSeSlpatJNJBG8YdiM) 2025-07-06 19:40:27.423753 | orchestrator | 2025-07-06 19:40:27.424346 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:40:27.425034 | orchestrator | Sunday 06 July 2025 19:40:27 +0000 (0:00:01.024) 0:00:24.217 *********** 2025-07-06 19:40:28.458967 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP+7K1wcotynFrt2qLpAznlZ5M8f4ddNgL3ROgPIUF76) 2025-07-06 19:40:28.461822 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvBB10MOw1jFzeASOMbchbsn+PrFQuclca/aYqRLxaRHDJp0a9xpib5uQGxWiWS7tjFSIZsjUjjjbY8ZAXSBpPyzK0P5ebY7scKQFPfCBm69KaDYo2sBca4o0jXxzkI8ziAecFQk4o3D1DlMqDJRMNryJ+1ofkU0ZfCYZ0wuBaouYcZbsQaFCLGv6vg239EFu2HKpist7t68p3ORljgp4B9fDFxDqSdQOs78nCWDsw32UoIMJHA49ybZhLLx+oIUYRQqlukG5TzoPbR7hD0CC/0Lu2nTcBhPxF0uFOd/cstMZ/YJxMIbec/rJGU7lzMAURVKp85NxVzbmpWqagKR4Iem7CfFKtktNq9sCid3vJyMe0lU/lmsO9SnQ1Rqr4ZZgXjcyuluk8g8YYjVhVTmzAddl+e1vhFvIbPC9OY2GRJZZxMTp3OSXNREEg+pu9p+VIfQYxIoZaG71b7w2tISz9RoBrRTn1hvQmdq1HMiUVVYtmWttOHHRgAiYjOBrVpI0=) 2025-07-06 19:40:28.462913 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKh5JI3g3wlW7s7NuqyFNUqvmrBt4mEHSyvGuk9IJePgqAHPmwlZHnqZzSPd0kwU6i9o8R+S+D91YDCv3m7EpVw=) 2025-07-06 19:40:28.464069 | orchestrator | 2025-07-06 19:40:28.465067 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:40:28.465757 | orchestrator | Sunday 06 July 2025 19:40:28 +0000 (0:00:01.039) 0:00:25.257 *********** 2025-07-06 19:40:29.465409 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLWp0EFWR4cWAXYbiR0Q6cGqxKt9u6nh/xh1OcsEKsRGuUwmUTc5zNXPrh78eWsVAsHa8gtclyrBTJ0JjwyjJZ4=) 2025-07-06 19:40:29.465657 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDTHgiozb5s8fhna2j+PJ3YNbK16HRVXjG+OpC54+2rUcHG7pbVC2nDX55349zzmzFJAZOUbo6xKdtBmqO9xU9mZ39CjjL0nMQxIxSr8qUN3WFij5xPDieFrgaHeVZmGp1Q9ZnT2PpOlj5sTeoJUbxNlnVzML3/DMucR1iAUPTwIQ5Nmv0Q29SPDJGyPFetV/77ucU6fx2Ry5H81jMeCyfbMKeU6OrW62pJQjk1pixqWLfOTsEy45NIECIKaZ8ZAiZzPggzsndN99bfKbj3NHh/XsPF/bcqEXtlMX/DcVUwoeN7NLCngfcBbU7SzwhDEsxu88nOeTgslGmyUNJBqjWKwNWBQgUKe2FDPCs7jolNNFH+g5WeV7ErIpjAYwx6aT9pDGOytV6SP+NIDWKw8uW+Qfd2d4IEIXowtyEaQ1f7EHLtllZHhK1koAFZQcesXA4oIx9Lgk5A6bEZpytLUlkPwLDVP1ba1ifKTudEbq1XxArMhCOjDSlz6XBayPsgIT8=) 2025-07-06 19:40:29.466682 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHzmr9BmBxH4K1/pnVj66PcXq0VQCiCfYs5zE4/R5V+v) 2025-07-06 19:40:29.467185 | orchestrator | 2025-07-06 19:40:29.468525 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-07-06 19:40:29.469583 | orchestrator | Sunday 06 July 2025 19:40:29 +0000 (0:00:01.006) 0:00:26.263 *********** 2025-07-06 19:40:29.643949 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-07-06 19:40:29.645046 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-07-06 19:40:29.646423 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-07-06 19:40:29.646492 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-07-06 19:40:29.648129 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-07-06 19:40:29.649439 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-07-06 19:40:29.650277 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-07-06 19:40:29.651192 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:40:29.652187 | orchestrator | 2025-07-06 19:40:29.653066 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-07-06 19:40:29.653786 | orchestrator | Sunday 06 July 2025 19:40:29 +0000 (0:00:00.179) 0:00:26.442 *********** 2025-07-06 19:40:29.702172 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:40:29.702388 | orchestrator | 2025-07-06 19:40:29.703702 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-07-06 19:40:29.704895 | orchestrator | Sunday 06 July 2025 19:40:29 +0000 (0:00:00.059) 0:00:26.501 *********** 2025-07-06 19:40:29.748969 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:40:29.750140 | orchestrator | 2025-07-06 19:40:29.750618 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-07-06 19:40:29.751756 | orchestrator | Sunday 06 July 2025 19:40:29 +0000 (0:00:00.047) 0:00:26.549 *********** 2025-07-06 19:40:30.266706 | orchestrator | changed: [testbed-manager] 2025-07-06 19:40:30.266809 | orchestrator | 2025-07-06 19:40:30.266825 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:40:30.266873 | orchestrator | 2025-07-06 19:40:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 19:40:30.266888 | orchestrator | 2025-07-06 19:40:30 | INFO  | Please wait and do not abort execution. 
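Editor's note: the known_hosts play runs ssh-keyscan once per host by hostname and again by ansible_host address, then writes the scanned rsa, ecdsa and ed25519 entries and finally fixes the file permissions. The scan-and-append step corresponds roughly to the following shell sketch; the target file path, the host list expression and the exact mode are assumptions for illustration.

    # Rough equivalent of the scan performed by osism.commons.known_hosts.
    KNOWN_HOSTS=~/.ssh/known_hosts
    for host in testbed-manager testbed-node-{0..5}; do
        # Scan by hostname; the play repeats this with each host's ansible_host address.
        ssh-keyscan -t rsa,ecdsa,ed25519 "$host" >> "$KNOWN_HOSTS" 2>/dev/null
    done
    chmod 644 "$KNOWN_HOSTS"   # matches the final "Set file permissions" task; the exact mode is not shown in the log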
2025-07-06 19:40:30.266957 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-06 19:40:30.267845 | orchestrator | 2025-07-06 19:40:30.269750 | orchestrator | 2025-07-06 19:40:30.270442 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:40:30.272028 | orchestrator | Sunday 06 July 2025 19:40:30 +0000 (0:00:00.510) 0:00:27.059 *********** 2025-07-06 19:40:30.272795 | orchestrator | =============================================================================== 2025-07-06 19:40:30.273466 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.86s 2025-07-06 19:40:30.273726 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.19s 2025-07-06 19:40:30.274091 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-07-06 19:40:30.274623 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-07-06 19:40:30.274727 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-07-06 19:40:30.275264 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-07-06 19:40:30.275509 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-07-06 19:40:30.275945 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-07-06 19:40:30.276296 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-07-06 19:40:30.277392 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-07-06 19:40:30.277847 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-07-06 19:40:30.278682 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-07-06 19:40:30.279324 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-07-06 19:40:30.279846 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-07-06 19:40:30.280511 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-07-06 19:40:30.281344 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-07-06 19:40:30.281517 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.51s 2025-07-06 19:40:30.281981 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2025-07-06 19:40:30.282550 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-07-06 19:40:30.282753 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-07-06 19:40:30.731623 | orchestrator | + osism apply squid 2025-07-06 19:40:32.418182 | orchestrator | Registering Redlock._acquired_script 2025-07-06 19:40:32.418346 | orchestrator | Registering Redlock._extend_script 2025-07-06 19:40:32.418363 | orchestrator | Registering Redlock._release_script 2025-07-06 19:40:32.475284 | orchestrator | 2025-07-06 19:40:32 | INFO  | Task 6e6a4caf-bb10-4331-b564-17add5321bf2 (squid) was 
prepared for execution. 2025-07-06 19:40:32.475401 | orchestrator | 2025-07-06 19:40:32 | INFO  | It takes a moment until task 6e6a4caf-bb10-4331-b564-17add5321bf2 (squid) has been started and output is visible here. 2025-07-06 19:40:36.329289 | orchestrator | 2025-07-06 19:40:36.330434 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-07-06 19:40:36.331893 | orchestrator | 2025-07-06 19:40:36.332791 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-07-06 19:40:36.333890 | orchestrator | Sunday 06 July 2025 19:40:36 +0000 (0:00:00.173) 0:00:00.173 *********** 2025-07-06 19:40:36.405063 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-07-06 19:40:36.405724 | orchestrator | 2025-07-06 19:40:36.407831 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-07-06 19:40:36.408387 | orchestrator | Sunday 06 July 2025 19:40:36 +0000 (0:00:00.078) 0:00:00.252 *********** 2025-07-06 19:40:37.835977 | orchestrator | ok: [testbed-manager] 2025-07-06 19:40:37.837564 | orchestrator | 2025-07-06 19:40:37.837965 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-07-06 19:40:37.839174 | orchestrator | Sunday 06 July 2025 19:40:37 +0000 (0:00:01.430) 0:00:01.682 *********** 2025-07-06 19:40:38.979015 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-07-06 19:40:38.980264 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-07-06 19:40:38.980774 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-07-06 19:40:38.981612 | orchestrator | 2025-07-06 19:40:38.982323 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-07-06 19:40:38.983888 | orchestrator | Sunday 06 July 2025 19:40:38 +0000 (0:00:01.142) 0:00:02.824 *********** 2025-07-06 19:40:40.014316 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-07-06 19:40:40.014685 | orchestrator | 2025-07-06 19:40:40.015923 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-07-06 19:40:40.016731 | orchestrator | Sunday 06 July 2025 19:40:40 +0000 (0:00:01.035) 0:00:03.860 *********** 2025-07-06 19:40:40.371891 | orchestrator | ok: [testbed-manager] 2025-07-06 19:40:40.372747 | orchestrator | 2025-07-06 19:40:40.374367 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-07-06 19:40:40.376092 | orchestrator | Sunday 06 July 2025 19:40:40 +0000 (0:00:00.358) 0:00:04.218 *********** 2025-07-06 19:40:41.249607 | orchestrator | changed: [testbed-manager] 2025-07-06 19:40:41.249828 | orchestrator | 2025-07-06 19:40:41.250839 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-07-06 19:40:41.253274 | orchestrator | Sunday 06 July 2025 19:40:41 +0000 (0:00:00.875) 0:00:05.094 *********** 2025-07-06 19:41:13.113765 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-07-06 19:41:13.113881 | orchestrator | ok: [testbed-manager] 2025-07-06 19:41:13.114069 | orchestrator | 2025-07-06 19:41:13.114142 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-07-06 19:41:13.115127 | orchestrator | Sunday 06 July 2025 19:41:13 +0000 (0:00:31.860) 0:00:36.955 *********** 2025-07-06 19:41:25.629738 | orchestrator | changed: [testbed-manager] 2025-07-06 19:41:25.629862 | orchestrator | 2025-07-06 19:41:25.629878 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-07-06 19:41:25.629891 | orchestrator | Sunday 06 July 2025 19:41:25 +0000 (0:00:12.518) 0:00:49.473 *********** 2025-07-06 19:42:25.698875 | orchestrator | Pausing for 60 seconds 2025-07-06 19:42:25.698996 | orchestrator | changed: [testbed-manager] 2025-07-06 19:42:25.699068 | orchestrator | 2025-07-06 19:42:25.699309 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-07-06 19:42:25.700047 | orchestrator | Sunday 06 July 2025 19:42:25 +0000 (0:01:00.066) 0:01:49.539 *********** 2025-07-06 19:42:25.757069 | orchestrator | ok: [testbed-manager] 2025-07-06 19:42:25.757337 | orchestrator | 2025-07-06 19:42:25.758130 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-07-06 19:42:25.759044 | orchestrator | Sunday 06 July 2025 19:42:25 +0000 (0:00:00.063) 0:01:49.603 *********** 2025-07-06 19:42:26.386328 | orchestrator | changed: [testbed-manager] 2025-07-06 19:42:26.386494 | orchestrator | 2025-07-06 19:42:26.387276 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:42:26.387491 | orchestrator | 2025-07-06 19:42:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 19:42:26.388411 | orchestrator | 2025-07-06 19:42:26 | INFO  | Please wait and do not abort execution. 
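
[Editor's note] After the restart handler, the role pauses 60 seconds and then waits for a healthy squid service. How the role performs that check is not visible in this log; a minimal hedged sketch of one common approach, polling Docker's health status (container name "squid" and the use of a HEALTHCHECK are assumptions):

    # Illustration only: poll the container health status until it reports healthy.
    until [ "$(docker inspect --format '{{.State.Health.Status}}' squid 2>/dev/null)" = "healthy" ]; do
        echo "waiting for squid to become healthy ..."
        sleep 5
    done
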
2025-07-06 19:42:26.388922 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:42:26.389617 | orchestrator | 2025-07-06 19:42:26.390411 | orchestrator | 2025-07-06 19:42:26.391085 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:42:26.392012 | orchestrator | Sunday 06 July 2025 19:42:26 +0000 (0:00:00.628) 0:01:50.231 *********** 2025-07-06 19:42:26.392480 | orchestrator | =============================================================================== 2025-07-06 19:42:26.393253 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-07-06 19:42:26.393632 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.86s 2025-07-06 19:42:26.394278 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.52s 2025-07-06 19:42:26.394778 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.43s 2025-07-06 19:42:26.395221 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.14s 2025-07-06 19:42:26.396172 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.04s 2025-07-06 19:42:26.396196 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s 2025-07-06 19:42:26.396615 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s 2025-07-06 19:42:26.396975 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s 2025-07-06 19:42:26.397431 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2025-07-06 19:42:26.397790 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-07-06 19:42:26.847733 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-07-06 19:42:26.847838 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-07-06 19:42:26.851394 | orchestrator | ++ semver 9.1.0 9.0.0 2025-07-06 19:42:26.902617 | orchestrator | + [[ 1 -lt 0 ]] 2025-07-06 19:42:26.902773 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-07-06 19:42:28.557441 | orchestrator | Registering Redlock._acquired_script 2025-07-06 19:42:28.557545 | orchestrator | Registering Redlock._extend_script 2025-07-06 19:42:28.557560 | orchestrator | Registering Redlock._release_script 2025-07-06 19:42:28.612769 | orchestrator | 2025-07-06 19:42:28 | INFO  | Task c72c3565-f917-4fa9-8acb-6c6de27dbfaf (operator) was prepared for execution. 2025-07-06 19:42:28.612858 | orchestrator | 2025-07-06 19:42:28 | INFO  | It takes a moment until task c72c3565-f917-4fa9-8acb-6c6de27dbfaf (operator) has been started and output is visible here. 
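
[Editor's note] The "+" trace lines above show the deploy script switching the Kolla image namespace for tagged releases and running a semver comparison before continuing with the operator play. Reassembled as a sketch (commands and paths are taken verbatim from the trace; the variable name and the contents of the skipped branch are assumptions, since the surrounding script is not shown in this log):

    # Reconstructed from the set -x trace above; 9.1.0 is the release being deployed.
    manager_version=9.1.0
    if [[ "${manager_version}" != "latest" ]]; then
        # Tagged releases pull Kolla images from the kolla/release namespace.
        sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' \
            /opt/configuration/inventory/group_vars/all/kolla.yml
    fi
    # "semver 9.1.0 9.0.0" printed 1 (newer than 9.0.0), so this branch was skipped.
    if [[ $(semver "${manager_version}" 9.0.0) -lt 0 ]]; then
        : # steps for releases older than 9.0.0 would run here (not visible in this log)
    fi
    osism apply operator -u ubuntu -l testbed-nodes
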
2025-07-06 19:42:32.493165 | orchestrator | 2025-07-06 19:42:32.496047 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-07-06 19:42:32.496099 | orchestrator | 2025-07-06 19:42:32.496112 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-06 19:42:32.497116 | orchestrator | Sunday 06 July 2025 19:42:32 +0000 (0:00:00.149) 0:00:00.149 *********** 2025-07-06 19:42:35.674696 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:42:35.674870 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:42:35.675349 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:42:35.676085 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:42:35.676987 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:42:35.677175 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:42:35.678830 | orchestrator | 2025-07-06 19:42:35.679235 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-07-06 19:42:35.679896 | orchestrator | Sunday 06 July 2025 19:42:35 +0000 (0:00:03.184) 0:00:03.333 *********** 2025-07-06 19:42:36.478714 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:42:36.479203 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:42:36.480269 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:42:36.481004 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:42:36.481732 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:42:36.482784 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:42:36.483757 | orchestrator | 2025-07-06 19:42:36.485512 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-07-06 19:42:36.486265 | orchestrator | 2025-07-06 19:42:36.487039 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-07-06 19:42:36.487798 | orchestrator | Sunday 06 July 2025 19:42:36 +0000 (0:00:00.802) 0:00:04.136 *********** 2025-07-06 19:42:36.571525 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:42:36.611188 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:42:36.635836 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:42:36.674655 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:42:36.674797 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:42:36.675114 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:42:36.675660 | orchestrator | 2025-07-06 19:42:36.676011 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-07-06 19:42:36.677819 | orchestrator | Sunday 06 July 2025 19:42:36 +0000 (0:00:00.197) 0:00:04.333 *********** 2025-07-06 19:42:36.735812 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:42:36.758884 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:42:36.788347 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:42:36.840140 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:42:36.840648 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:42:36.842306 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:42:36.842966 | orchestrator | 2025-07-06 19:42:36.844196 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-07-06 19:42:36.844970 | orchestrator | Sunday 06 July 2025 19:42:36 +0000 (0:00:00.165) 0:00:04.499 *********** 2025-07-06 19:42:37.450618 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:42:37.450690 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:42:37.450696 | orchestrator | changed: [testbed-node-1] 2025-07-06 
19:42:37.452394 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:42:37.452904 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:42:37.453898 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:42:37.454392 | orchestrator | 2025-07-06 19:42:37.456661 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-07-06 19:42:37.456678 | orchestrator | Sunday 06 July 2025 19:42:37 +0000 (0:00:00.601) 0:00:05.100 *********** 2025-07-06 19:42:38.267924 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:42:38.268925 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:42:38.268961 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:42:38.269942 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:42:38.271101 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:42:38.271930 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:42:38.273535 | orchestrator | 2025-07-06 19:42:38.274954 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-07-06 19:42:38.275655 | orchestrator | Sunday 06 July 2025 19:42:38 +0000 (0:00:00.822) 0:00:05.923 *********** 2025-07-06 19:42:39.455774 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-07-06 19:42:39.460335 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-07-06 19:42:39.460420 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-07-06 19:42:39.460434 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-07-06 19:42:39.460445 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-07-06 19:42:39.461592 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-07-06 19:42:39.462710 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-07-06 19:42:39.463518 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-07-06 19:42:39.464434 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-07-06 19:42:39.464895 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-07-06 19:42:39.465581 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-07-06 19:42:39.466214 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-07-06 19:42:39.466827 | orchestrator | 2025-07-06 19:42:39.467315 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-07-06 19:42:39.467844 | orchestrator | Sunday 06 July 2025 19:42:39 +0000 (0:00:01.188) 0:00:07.111 *********** 2025-07-06 19:42:40.676613 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:42:40.676742 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:42:40.676937 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:42:40.678105 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:42:40.678835 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:42:40.679772 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:42:40.680767 | orchestrator | 2025-07-06 19:42:40.682463 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-07-06 19:42:40.683095 | orchestrator | Sunday 06 July 2025 19:42:40 +0000 (0:00:01.220) 0:00:08.332 *********** 2025-07-06 19:42:41.837015 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-07-06 19:42:41.839862 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-07-06 19:42:41.839916 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-07-06 19:42:41.948684 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-07-06 19:42:41.949080 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-07-06 19:42:41.949683 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-07-06 19:42:41.950259 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-07-06 19:42:41.951026 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-07-06 19:42:41.952586 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-07-06 19:42:41.955427 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-07-06 19:42:41.955632 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-07-06 19:42:41.956687 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-07-06 19:42:41.958088 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-07-06 19:42:41.959255 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-07-06 19:42:41.959503 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-07-06 19:42:41.960335 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-07-06 19:42:41.961096 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-07-06 19:42:41.961569 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-07-06 19:42:41.962281 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-07-06 19:42:41.962774 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-07-06 19:42:41.965203 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-07-06 19:42:41.965265 | orchestrator | 2025-07-06 19:42:41.965369 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-07-06 19:42:41.966005 | orchestrator | Sunday 06 July 2025 19:42:41 +0000 (0:00:01.273) 0:00:09.606 *********** 2025-07-06 19:42:42.516054 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:42:42.516276 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:42:42.516741 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:42:42.517542 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:42:42.519203 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:42:42.519732 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:42:42.520629 | orchestrator | 2025-07-06 19:42:42.521540 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-07-06 19:42:42.522587 | orchestrator | Sunday 06 July 2025 19:42:42 +0000 (0:00:00.568) 0:00:10.174 *********** 2025-07-06 19:42:42.589413 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:42:42.623868 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:42:42.687494 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:42:42.687579 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:42:42.687918 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:42:42.688167 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:42:42.688754 | orchestrator | 2025-07-06 19:42:42.689222 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-07-06 19:42:42.691027 | orchestrator | Sunday 06 July 2025 19:42:42 +0000 (0:00:00.171) 0:00:10.345 *********** 2025-07-06 19:42:43.381199 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-06 19:42:43.382337 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-07-06 19:42:43.384132 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:42:43.384167 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-06 19:42:43.384180 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:42:43.384976 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:42:43.385875 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-06 19:42:43.386340 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-07-06 19:42:43.386863 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:42:43.387586 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:42:43.389006 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-06 19:42:43.389398 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:42:43.389762 | orchestrator | 2025-07-06 19:42:43.390461 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-07-06 19:42:43.390978 | orchestrator | Sunday 06 July 2025 19:42:43 +0000 (0:00:00.694) 0:00:11.040 *********** 2025-07-06 19:42:43.443555 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:42:43.461691 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:42:43.510725 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:42:43.544635 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:42:43.544823 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:42:43.546201 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:42:43.546932 | orchestrator | 2025-07-06 19:42:43.547802 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-07-06 19:42:43.548325 | orchestrator | Sunday 06 July 2025 19:42:43 +0000 (0:00:00.162) 0:00:11.202 *********** 2025-07-06 19:42:43.592813 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:42:43.613741 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:42:43.637939 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:42:43.703078 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:42:43.703479 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:42:43.705660 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:42:43.706567 | orchestrator | 2025-07-06 19:42:43.707669 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-07-06 19:42:43.710078 | orchestrator | Sunday 06 July 2025 19:42:43 +0000 (0:00:00.157) 0:00:11.360 *********** 2025-07-06 19:42:43.782637 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:42:43.804688 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:42:43.830013 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:42:43.861683 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:42:43.862176 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:42:43.863557 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:42:43.864319 | orchestrator | 2025-07-06 19:42:43.865129 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-07-06 19:42:43.866325 | orchestrator | Sunday 06 July 2025 19:42:43 +0000 (0:00:00.159) 0:00:11.520 *********** 2025-07-06 19:42:44.501368 | orchestrator | changed: [testbed-node-0] 2025-07-06 
19:42:44.502795 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:42:44.503135 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:42:44.503674 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:42:44.504531 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:42:44.505251 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:42:44.505898 | orchestrator | 2025-07-06 19:42:44.506676 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-07-06 19:42:44.507714 | orchestrator | Sunday 06 July 2025 19:42:44 +0000 (0:00:00.639) 0:00:12.159 *********** 2025-07-06 19:42:44.588291 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:42:44.617541 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:42:44.717561 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:42:44.718193 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:42:44.718818 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:42:44.720000 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:42:44.721045 | orchestrator | 2025-07-06 19:42:44.722432 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:42:44.722814 | orchestrator | 2025-07-06 19:42:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 19:42:44.723565 | orchestrator | 2025-07-06 19:42:44 | INFO  | Please wait and do not abort execution. 2025-07-06 19:42:44.724720 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-06 19:42:44.725830 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-06 19:42:44.728271 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-06 19:42:44.729436 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-06 19:42:44.730465 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-06 19:42:44.731298 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-06 19:42:44.732096 | orchestrator | 2025-07-06 19:42:44.732745 | orchestrator | 2025-07-06 19:42:44.733789 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:42:44.734290 | orchestrator | Sunday 06 July 2025 19:42:44 +0000 (0:00:00.216) 0:00:12.375 *********** 2025-07-06 19:42:44.734766 | orchestrator | =============================================================================== 2025-07-06 19:42:44.735284 | orchestrator | Gathering Facts --------------------------------------------------------- 3.18s 2025-07-06 19:42:44.735741 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.27s 2025-07-06 19:42:44.736292 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.22s 2025-07-06 19:42:44.736918 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s 2025-07-06 19:42:44.737309 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.82s 2025-07-06 19:42:44.737622 | orchestrator | Do not require tty for all users ---------------------------------------- 0.80s 2025-07-06 19:42:44.738201 | orchestrator | 
osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s 2025-07-06 19:42:44.738633 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s 2025-07-06 19:42:44.739090 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s 2025-07-06 19:42:44.739561 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s 2025-07-06 19:42:44.740153 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2025-07-06 19:42:44.740238 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s 2025-07-06 19:42:44.740666 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s 2025-07-06 19:42:44.741051 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s 2025-07-06 19:42:44.741435 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s 2025-07-06 19:42:44.741743 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2025-07-06 19:42:44.742355 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2025-07-06 19:42:45.268425 | orchestrator | + osism apply --environment custom facts 2025-07-06 19:42:47.015647 | orchestrator | 2025-07-06 19:42:47 | INFO  | Trying to run play facts in environment custom 2025-07-06 19:42:47.020512 | orchestrator | Registering Redlock._acquired_script 2025-07-06 19:42:47.020634 | orchestrator | Registering Redlock._extend_script 2025-07-06 19:42:47.020651 | orchestrator | Registering Redlock._release_script 2025-07-06 19:42:47.079023 | orchestrator | 2025-07-06 19:42:47 | INFO  | Task 23858a04-5a2e-47e4-80c9-5aca693bcfb6 (facts) was prepared for execution. 2025-07-06 19:42:47.079110 | orchestrator | 2025-07-06 19:42:47 | INFO  | It takes a moment until task 23858a04-5a2e-47e4-80c9-5aca693bcfb6 (facts) has been started and output is visible here. 
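
[Editor's note] The facts play that follows ("Copy custom network devices fact", "Copy custom ceph devices facts") works by dropping files into Ansible's local facts directory: anything under /etc/ansible/facts.d/ is read during fact gathering and exposed as ansible_local.<name>. A minimal hedged example of that mechanism (file name and content are illustrative only, not the testbed's actual fact files):

    # Example of an Ansible local fact; JSON .fact files are picked up by the setup module.
    sudo mkdir -p /etc/ansible/facts.d
    cat <<'EOF' | sudo tee /etc/ansible/facts.d/example.fact
    {"role": "testbed-node"}
    EOF
    # After the next fact-gathering run the value is available as ansible_local.example.role.
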
2025-07-06 19:42:50.868603 | orchestrator | 2025-07-06 19:42:50.871417 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-07-06 19:42:50.873587 | orchestrator | 2025-07-06 19:42:50.874459 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-07-06 19:42:50.875249 | orchestrator | Sunday 06 July 2025 19:42:50 +0000 (0:00:00.085) 0:00:00.085 *********** 2025-07-06 19:42:52.296206 | orchestrator | ok: [testbed-manager] 2025-07-06 19:42:52.297231 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:42:52.297286 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:42:52.297327 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:42:52.298451 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:42:52.298841 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:42:52.300057 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:42:52.300769 | orchestrator | 2025-07-06 19:42:52.301372 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-07-06 19:42:52.302343 | orchestrator | Sunday 06 July 2025 19:42:52 +0000 (0:00:01.427) 0:00:01.512 *********** 2025-07-06 19:42:53.393065 | orchestrator | ok: [testbed-manager] 2025-07-06 19:42:53.395229 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:42:53.395300 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:42:53.396171 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:42:53.396999 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:42:53.397820 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:42:53.398599 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:42:53.399327 | orchestrator | 2025-07-06 19:42:53.400060 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-07-06 19:42:53.401019 | orchestrator | 2025-07-06 19:42:53.401821 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-07-06 19:42:53.402276 | orchestrator | Sunday 06 July 2025 19:42:53 +0000 (0:00:01.099) 0:00:02.612 *********** 2025-07-06 19:42:53.534908 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:42:53.537154 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:42:53.537833 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:42:53.538716 | orchestrator | 2025-07-06 19:42:53.542991 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-07-06 19:42:53.543447 | orchestrator | Sunday 06 July 2025 19:42:53 +0000 (0:00:00.141) 0:00:02.753 *********** 2025-07-06 19:42:53.740937 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:42:53.741039 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:42:53.741138 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:42:53.741431 | orchestrator | 2025-07-06 19:42:53.742130 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-06 19:42:53.742265 | orchestrator | Sunday 06 July 2025 19:42:53 +0000 (0:00:00.207) 0:00:02.961 *********** 2025-07-06 19:42:53.953476 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:42:53.954248 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:42:53.954995 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:42:53.956854 | orchestrator | 2025-07-06 19:42:53.956878 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-06 19:42:53.957756 | orchestrator | Sunday 06 
July 2025 19:42:53 +0000 (0:00:00.211) 0:00:03.173 *********** 2025-07-06 19:42:54.102501 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:42:54.107034 | orchestrator | 2025-07-06 19:42:54.107079 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-07-06 19:42:54.107092 | orchestrator | Sunday 06 July 2025 19:42:54 +0000 (0:00:00.148) 0:00:03.321 *********** 2025-07-06 19:42:54.538082 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:42:54.538256 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:42:54.539141 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:42:54.539814 | orchestrator | 2025-07-06 19:42:54.540118 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-06 19:42:54.540809 | orchestrator | Sunday 06 July 2025 19:42:54 +0000 (0:00:00.437) 0:00:03.758 *********** 2025-07-06 19:42:54.660036 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:42:54.660343 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:42:54.661383 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:42:54.661995 | orchestrator | 2025-07-06 19:42:54.663096 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-06 19:42:54.663287 | orchestrator | Sunday 06 July 2025 19:42:54 +0000 (0:00:00.121) 0:00:03.880 *********** 2025-07-06 19:42:55.701677 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:42:55.701785 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:42:55.703656 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:42:55.704628 | orchestrator | 2025-07-06 19:42:55.705415 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-06 19:42:55.706282 | orchestrator | Sunday 06 July 2025 19:42:55 +0000 (0:00:01.040) 0:00:04.920 *********** 2025-07-06 19:42:56.165302 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:42:56.165531 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:42:56.166453 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:42:56.167741 | orchestrator | 2025-07-06 19:42:56.168633 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-06 19:42:56.169135 | orchestrator | Sunday 06 July 2025 19:42:56 +0000 (0:00:00.462) 0:00:05.382 *********** 2025-07-06 19:42:57.191722 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:42:57.192656 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:42:57.193772 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:42:57.194994 | orchestrator | 2025-07-06 19:42:57.196437 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-07-06 19:42:57.197375 | orchestrator | Sunday 06 July 2025 19:42:57 +0000 (0:00:01.028) 0:00:06.411 *********** 2025-07-06 19:43:09.918802 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:43:09.918924 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:43:09.918940 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:43:09.920541 | orchestrator | 2025-07-06 19:43:09.922198 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-07-06 19:43:09.923174 | orchestrator | Sunday 06 July 2025 19:43:09 +0000 (0:00:12.722) 0:00:19.133 *********** 2025-07-06 19:43:10.021830 | orchestrator | 
skipping: [testbed-node-3] 2025-07-06 19:43:10.022363 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:43:10.023784 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:43:10.024463 | orchestrator | 2025-07-06 19:43:10.025471 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-07-06 19:43:10.025892 | orchestrator | Sunday 06 July 2025 19:43:10 +0000 (0:00:00.107) 0:00:19.241 *********** 2025-07-06 19:43:16.963797 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:43:16.964134 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:43:16.966186 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:43:16.966658 | orchestrator | 2025-07-06 19:43:16.968865 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-07-06 19:43:16.970159 | orchestrator | Sunday 06 July 2025 19:43:16 +0000 (0:00:06.940) 0:00:26.182 *********** 2025-07-06 19:43:17.388088 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:43:17.388257 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:43:17.389708 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:43:17.392104 | orchestrator | 2025-07-06 19:43:17.393289 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-07-06 19:43:17.394763 | orchestrator | Sunday 06 July 2025 19:43:17 +0000 (0:00:00.424) 0:00:26.607 *********** 2025-07-06 19:43:20.812618 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-07-06 19:43:20.812813 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-07-06 19:43:20.813710 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-07-06 19:43:20.816196 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-07-06 19:43:20.817577 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-07-06 19:43:20.818542 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-07-06 19:43:20.819763 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-07-06 19:43:20.820190 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-07-06 19:43:20.820651 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-07-06 19:43:20.821826 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-07-06 19:43:20.822080 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-07-06 19:43:20.823016 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-07-06 19:43:20.823660 | orchestrator | 2025-07-06 19:43:20.824311 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-07-06 19:43:20.825494 | orchestrator | Sunday 06 July 2025 19:43:20 +0000 (0:00:03.423) 0:00:30.030 *********** 2025-07-06 19:43:21.926175 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:43:21.926281 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:43:21.926799 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:43:21.927390 | orchestrator | 2025-07-06 19:43:21.928555 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-06 19:43:21.929123 | orchestrator | 2025-07-06 19:43:21.929770 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-06 19:43:21.930141 | orchestrator | 
Sunday 06 July 2025 19:43:21 +0000 (0:00:01.113) 0:00:31.144 *********** 2025-07-06 19:43:25.583497 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:43:25.584709 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:43:25.585483 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:43:25.586724 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:25.587458 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:43:25.588282 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:43:25.589371 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:43:25.590176 | orchestrator | 2025-07-06 19:43:25.591165 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:43:25.591597 | orchestrator | 2025-07-06 19:43:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 19:43:25.592058 | orchestrator | 2025-07-06 19:43:25 | INFO  | Please wait and do not abort execution. 2025-07-06 19:43:25.593216 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:43:25.597134 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:43:25.597163 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:43:25.598828 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:43:25.599641 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:43:25.599941 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:43:25.600639 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:43:25.601063 | orchestrator | 2025-07-06 19:43:25.601492 | orchestrator | 2025-07-06 19:43:25.602117 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:43:25.602364 | orchestrator | Sunday 06 July 2025 19:43:25 +0000 (0:00:03.660) 0:00:34.804 *********** 2025-07-06 19:43:25.602810 | orchestrator | =============================================================================== 2025-07-06 19:43:25.603159 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.72s 2025-07-06 19:43:25.603595 | orchestrator | Install required packages (Debian) -------------------------------------- 6.94s 2025-07-06 19:43:25.604143 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.66s 2025-07-06 19:43:25.604301 | orchestrator | Copy fact files --------------------------------------------------------- 3.42s 2025-07-06 19:43:25.604740 | orchestrator | Create custom facts directory ------------------------------------------- 1.43s 2025-07-06 19:43:25.604989 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.11s 2025-07-06 19:43:25.605409 | orchestrator | Copy fact file ---------------------------------------------------------- 1.10s 2025-07-06 19:43:25.605769 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.04s 2025-07-06 19:43:25.606114 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.03s 2025-07-06 19:43:25.606475 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.46s 2025-07-06 19:43:25.606781 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s 2025-07-06 19:43:25.607008 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s 2025-07-06 19:43:25.607306 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s 2025-07-06 19:43:25.607652 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s 2025-07-06 19:43:25.607984 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s 2025-07-06 19:43:25.608218 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.14s 2025-07-06 19:43:25.608682 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2025-07-06 19:43:25.608859 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2025-07-06 19:43:26.025782 | orchestrator | + osism apply bootstrap 2025-07-06 19:43:27.708493 | orchestrator | Registering Redlock._acquired_script 2025-07-06 19:43:27.708607 | orchestrator | Registering Redlock._extend_script 2025-07-06 19:43:27.708623 | orchestrator | Registering Redlock._release_script 2025-07-06 19:43:27.764691 | orchestrator | 2025-07-06 19:43:27 | INFO  | Task a9afef5d-4836-4f59-a717-7980cf9b15de (bootstrap) was prepared for execution. 2025-07-06 19:43:27.764788 | orchestrator | 2025-07-06 19:43:27 | INFO  | It takes a moment until task a9afef5d-4836-4f59-a717-7980cf9b15de (bootstrap) has been started and output is visible here. 2025-07-06 19:43:31.756637 | orchestrator | 2025-07-06 19:43:31.757633 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-07-06 19:43:31.761007 | orchestrator | 2025-07-06 19:43:31.762359 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-07-06 19:43:31.763609 | orchestrator | Sunday 06 July 2025 19:43:31 +0000 (0:00:00.157) 0:00:00.157 *********** 2025-07-06 19:43:31.829007 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:31.855539 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:43:31.881039 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:43:31.916838 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:43:31.989498 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:43:31.990778 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:43:31.990816 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:43:31.991240 | orchestrator | 2025-07-06 19:43:31.993769 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-06 19:43:31.995852 | orchestrator | 2025-07-06 19:43:31.997750 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-06 19:43:31.999236 | orchestrator | Sunday 06 July 2025 19:43:31 +0000 (0:00:00.235) 0:00:00.393 *********** 2025-07-06 19:43:35.750209 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:43:35.751225 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:43:35.751480 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:43:35.754215 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:43:35.754892 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:43:35.755383 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:43:35.756012 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:35.756605 | 
orchestrator | 2025-07-06 19:43:35.757404 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-07-06 19:43:35.758181 | orchestrator | 2025-07-06 19:43:35.758717 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-06 19:43:35.759293 | orchestrator | Sunday 06 July 2025 19:43:35 +0000 (0:00:03.760) 0:00:04.154 *********** 2025-07-06 19:43:35.867536 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-07-06 19:43:35.868382 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-07-06 19:43:35.868727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-07-06 19:43:35.869094 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-07-06 19:43:35.869572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 19:43:35.870103 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-07-06 19:43:35.870286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 19:43:35.905574 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-07-06 19:43:35.905790 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-07-06 19:43:35.906114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 19:43:35.906501 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-07-06 19:43:35.906963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-07-06 19:43:35.946710 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-07-06 19:43:35.946794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-06 19:43:35.947030 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-07-06 19:43:35.948830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-07-06 19:43:35.949323 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-07-06 19:43:35.950458 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-07-06 19:43:36.219576 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:43:36.219739 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:43:36.220482 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-07-06 19:43:36.221659 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-07-06 19:43:36.221683 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-07-06 19:43:36.222276 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-07-06 19:43:36.222818 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-07-06 19:43:36.223243 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-07-06 19:43:36.224073 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-07-06 19:43:36.224843 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-07-06 19:43:36.227913 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-07-06 19:43:36.227974 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-07-06 19:43:36.227987 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-07-06 19:43:36.227998 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-07-06 19:43:36.231070 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-5)  2025-07-06 19:43:36.231150 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-07-06 19:43:36.231163 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-07-06 19:43:36.231174 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-07-06 19:43:36.231185 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-06 19:43:36.231196 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-07-06 19:43:36.231263 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:43:36.231820 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-07-06 19:43:36.232163 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-06 19:43:36.232872 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-07-06 19:43:36.233173 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:43:36.233605 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-07-06 19:43:36.233926 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-06 19:43:36.234536 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-07-06 19:43:36.234835 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:43:36.235592 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-07-06 19:43:36.235895 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-07-06 19:43:36.236097 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-07-06 19:43:36.237276 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:43:36.237299 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-07-06 19:43:36.237311 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-07-06 19:43:36.237411 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-07-06 19:43:36.237800 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-07-06 19:43:36.238127 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:43:36.238670 | orchestrator | 2025-07-06 19:43:36.238696 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-07-06 19:43:36.239141 | orchestrator | 2025-07-06 19:43:36.239220 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-07-06 19:43:36.239623 | orchestrator | Sunday 06 July 2025 19:43:36 +0000 (0:00:00.468) 0:00:04.622 *********** 2025-07-06 19:43:38.387840 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:43:38.388050 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:43:38.388608 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:43:38.389963 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:43:38.390067 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:43:38.390093 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:43:38.390218 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:38.391405 | orchestrator | 2025-07-06 19:43:38.392031 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-07-06 19:43:38.392298 | orchestrator | Sunday 06 July 2025 19:43:38 +0000 (0:00:02.167) 0:00:06.790 *********** 2025-07-06 19:43:39.597636 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:39.597812 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:43:39.598656 | orchestrator | ok: [testbed-node-5] 2025-07-06 
19:43:39.599831 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:43:39.603376 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:43:39.603403 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:43:39.603415 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:43:39.603427 | orchestrator | 2025-07-06 19:43:39.604027 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-07-06 19:43:39.604634 | orchestrator | Sunday 06 July 2025 19:43:39 +0000 (0:00:01.208) 0:00:07.998 *********** 2025-07-06 19:43:39.839951 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:43:39.841223 | orchestrator | 2025-07-06 19:43:39.842201 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-07-06 19:43:39.843989 | orchestrator | Sunday 06 July 2025 19:43:39 +0000 (0:00:00.244) 0:00:08.242 *********** 2025-07-06 19:43:42.468891 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:43:42.469063 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:43:42.470403 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:43:42.473814 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:43:42.474938 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:43:42.477201 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:43:42.478177 | orchestrator | changed: [testbed-manager] 2025-07-06 19:43:42.479405 | orchestrator | 2025-07-06 19:43:42.480730 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-07-06 19:43:42.481847 | orchestrator | Sunday 06 July 2025 19:43:42 +0000 (0:00:02.625) 0:00:10.868 *********** 2025-07-06 19:43:42.541637 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:43:42.735371 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:43:42.736258 | orchestrator | 2025-07-06 19:43:42.736917 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-07-06 19:43:42.737701 | orchestrator | Sunday 06 July 2025 19:43:42 +0000 (0:00:00.270) 0:00:11.139 *********** 2025-07-06 19:43:43.719778 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:43:43.721172 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:43:43.722170 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:43:43.725085 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:43:43.725925 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:43:43.727477 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:43:43.727623 | orchestrator | 2025-07-06 19:43:43.728589 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-07-06 19:43:43.729419 | orchestrator | Sunday 06 July 2025 19:43:43 +0000 (0:00:00.982) 0:00:12.121 *********** 2025-07-06 19:43:43.786807 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:43:44.254177 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:43:44.254819 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:43:44.255160 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:43:44.255889 | orchestrator | changed: [testbed-node-3] 2025-07-06 
19:43:44.256410 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:43:44.256709 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:43:44.257183 | orchestrator | 2025-07-06 19:43:44.257874 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-07-06 19:43:44.258239 | orchestrator | Sunday 06 July 2025 19:43:44 +0000 (0:00:00.535) 0:00:12.656 *********** 2025-07-06 19:43:44.369248 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:43:44.390769 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:43:44.418956 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:43:44.675379 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:43:44.679342 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:43:44.679399 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:43:44.679412 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:44.680355 | orchestrator | 2025-07-06 19:43:44.680558 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-07-06 19:43:44.682313 | orchestrator | Sunday 06 July 2025 19:43:44 +0000 (0:00:00.418) 0:00:13.075 *********** 2025-07-06 19:43:44.759186 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:43:44.790957 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:43:44.809356 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:43:44.831981 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:43:44.916141 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:43:44.917777 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:43:44.919164 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:43:44.920837 | orchestrator | 2025-07-06 19:43:44.920882 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-07-06 19:43:44.921188 | orchestrator | Sunday 06 July 2025 19:43:44 +0000 (0:00:00.243) 0:00:13.319 *********** 2025-07-06 19:43:45.230205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:43:45.231905 | orchestrator | 2025-07-06 19:43:45.233265 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-07-06 19:43:45.233998 | orchestrator | Sunday 06 July 2025 19:43:45 +0000 (0:00:00.313) 0:00:13.633 *********** 2025-07-06 19:43:45.585289 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:43:45.588975 | orchestrator | 2025-07-06 19:43:45.589035 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-07-06 19:43:45.589870 | orchestrator | Sunday 06 July 2025 19:43:45 +0000 (0:00:00.351) 0:00:13.984 *********** 2025-07-06 19:43:46.718310 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:46.718426 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:43:46.719296 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:43:46.721158 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:43:46.721791 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:43:46.722538 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:43:46.723206 | 
orchestrator | ok: [testbed-node-1] 2025-07-06 19:43:46.723832 | orchestrator | 2025-07-06 19:43:46.724318 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-07-06 19:43:46.724945 | orchestrator | Sunday 06 July 2025 19:43:46 +0000 (0:00:01.132) 0:00:15.117 *********** 2025-07-06 19:43:46.790710 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:43:46.815252 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:43:46.846870 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:43:46.870094 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:43:46.941254 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:43:46.941336 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:43:46.942113 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:43:46.942565 | orchestrator | 2025-07-06 19:43:46.943502 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-07-06 19:43:46.943799 | orchestrator | Sunday 06 July 2025 19:43:46 +0000 (0:00:00.226) 0:00:15.343 *********** 2025-07-06 19:43:47.531192 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:47.532566 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:43:47.533183 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:43:47.535175 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:43:47.535206 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:43:47.535685 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:43:47.536770 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:43:47.536981 | orchestrator | 2025-07-06 19:43:47.537657 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-07-06 19:43:47.538144 | orchestrator | Sunday 06 July 2025 19:43:47 +0000 (0:00:00.566) 0:00:15.910 *********** 2025-07-06 19:43:47.598925 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:43:47.628534 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:43:47.665183 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:43:47.694875 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:43:47.774801 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:43:47.775689 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:43:47.776557 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:43:47.779793 | orchestrator | 2025-07-06 19:43:47.779843 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-07-06 19:43:47.779857 | orchestrator | Sunday 06 July 2025 19:43:47 +0000 (0:00:00.267) 0:00:16.178 *********** 2025-07-06 19:43:48.331851 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:48.332019 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:43:48.333549 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:43:48.333848 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:43:48.334406 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:43:48.335308 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:43:48.335947 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:43:48.336910 | orchestrator | 2025-07-06 19:43:48.337169 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-07-06 19:43:48.338099 | orchestrator | Sunday 06 July 2025 19:43:48 +0000 (0:00:00.556) 0:00:16.734 *********** 2025-07-06 19:43:49.432983 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:49.433055 | orchestrator | changed: 
[testbed-node-3] 2025-07-06 19:43:49.433092 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:43:49.433708 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:43:49.434539 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:43:49.435654 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:43:49.436333 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:43:49.437149 | orchestrator | 2025-07-06 19:43:49.437822 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-07-06 19:43:49.438066 | orchestrator | Sunday 06 July 2025 19:43:49 +0000 (0:00:01.096) 0:00:17.831 *********** 2025-07-06 19:43:50.536803 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:50.537112 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:43:50.537803 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:43:50.539243 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:43:50.540736 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:43:50.543418 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:43:50.544341 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:43:50.544939 | orchestrator | 2025-07-06 19:43:50.545770 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-07-06 19:43:50.546391 | orchestrator | Sunday 06 July 2025 19:43:50 +0000 (0:00:01.106) 0:00:18.937 *********** 2025-07-06 19:43:50.946012 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:43:50.946164 | orchestrator | 2025-07-06 19:43:50.946246 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-07-06 19:43:50.947894 | orchestrator | Sunday 06 July 2025 19:43:50 +0000 (0:00:00.408) 0:00:19.345 *********** 2025-07-06 19:43:51.023732 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:43:52.183835 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:43:52.184839 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:43:52.186703 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:43:52.188075 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:43:52.189837 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:43:52.191322 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:43:52.191945 | orchestrator | 2025-07-06 19:43:52.193402 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-07-06 19:43:52.194281 | orchestrator | Sunday 06 July 2025 19:43:52 +0000 (0:00:01.237) 0:00:20.583 *********** 2025-07-06 19:43:52.258281 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:52.274473 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:43:52.322345 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:43:52.385697 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:43:52.386133 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:43:52.387496 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:43:52.388848 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:43:52.389261 | orchestrator | 2025-07-06 19:43:52.390693 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-07-06 19:43:52.391037 | orchestrator | Sunday 06 July 2025 19:43:52 +0000 (0:00:00.204) 0:00:20.787 *********** 2025-07-06 19:43:52.462814 | orchestrator | ok: 
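
The osism.commons.resolvconf tasks above point /etc/resolv.conf at the systemd-resolved stub resolver, copy the resolver configuration and restart systemd-resolved on the nodes. A minimal stand-alone sketch of the same effect is shown below; it is not the role's actual code, and the upstream DNS server is only an illustrative placeholder.

  - hosts: all
    become: true
    tasks:
      # Illustrative resolved configuration; the role ships its own files.
      - name: Configure systemd-resolved with an example upstream resolver
        ansible.builtin.copy:
          dest: /etc/systemd/resolved.conf
          content: |
            [Resolve]
            DNS=9.9.9.9
          mode: "0644"

      - name: Point /etc/resolv.conf at the systemd-resolved stub
        ansible.builtin.file:
          src: /run/systemd/resolve/stub-resolv.conf
          dest: /etc/resolv.conf
          state: link
          force: true

      - name: Enable and restart systemd-resolved
        ansible.builtin.systemd:
          name: systemd-resolved
          state: restarted
          enabled: true
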
[testbed-manager] 2025-07-06 19:43:52.489171 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:43:52.516056 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:43:52.540213 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:43:52.633941 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:43:52.634615 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:43:52.635714 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:43:52.636839 | orchestrator | 2025-07-06 19:43:52.638349 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-06 19:43:52.638533 | orchestrator | Sunday 06 July 2025 19:43:52 +0000 (0:00:00.248) 0:00:21.036 *********** 2025-07-06 19:43:52.711292 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:52.759132 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:43:52.786287 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:43:52.860529 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:43:52.861012 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:43:52.862356 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:43:52.863536 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:43:52.864265 | orchestrator | 2025-07-06 19:43:52.865408 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-06 19:43:52.866840 | orchestrator | Sunday 06 July 2025 19:43:52 +0000 (0:00:00.226) 0:00:21.262 *********** 2025-07-06 19:43:53.184097 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:43:53.184256 | orchestrator | 2025-07-06 19:43:53.185604 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-07-06 19:43:53.186752 | orchestrator | Sunday 06 July 2025 19:43:53 +0000 (0:00:00.323) 0:00:21.586 *********** 2025-07-06 19:43:53.723547 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:53.723706 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:43:53.726244 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:43:53.726651 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:43:53.728231 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:43:53.729112 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:43:53.729954 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:43:53.730740 | orchestrator | 2025-07-06 19:43:53.731723 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-06 19:43:53.732191 | orchestrator | Sunday 06 July 2025 19:43:53 +0000 (0:00:00.539) 0:00:22.125 *********** 2025-07-06 19:43:53.826712 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:43:53.850733 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:43:53.875869 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:43:53.933661 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:43:53.935642 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:43:53.936051 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:43:53.937702 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:43:53.938546 | orchestrator | 2025-07-06 19:43:53.939495 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-06 19:43:53.940163 | orchestrator | Sunday 06 July 2025 19:43:53 +0000 (0:00:00.210) 0:00:22.336 *********** 2025-07-06 19:43:54.998540 | 
orchestrator | ok: [testbed-node-3] 2025-07-06 19:43:55.002250 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:55.003257 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:43:55.004105 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:43:55.004750 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:43:55.006148 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:43:55.007062 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:43:55.007242 | orchestrator | 2025-07-06 19:43:55.009677 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-06 19:43:55.010846 | orchestrator | Sunday 06 July 2025 19:43:54 +0000 (0:00:01.062) 0:00:23.399 *********** 2025-07-06 19:43:55.558557 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:55.559017 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:43:55.559366 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:43:55.560789 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:43:55.561539 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:43:55.562470 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:43:55.563243 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:43:55.564339 | orchestrator | 2025-07-06 19:43:55.565035 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-06 19:43:55.565350 | orchestrator | Sunday 06 July 2025 19:43:55 +0000 (0:00:00.561) 0:00:23.961 *********** 2025-07-06 19:43:56.695759 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:56.695980 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:43:56.695997 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:43:56.697203 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:43:56.697997 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:43:56.698608 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:43:56.699246 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:43:56.700004 | orchestrator | 2025-07-06 19:43:56.700739 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-07-06 19:43:56.701224 | orchestrator | Sunday 06 July 2025 19:43:56 +0000 (0:00:01.135) 0:00:25.096 *********** 2025-07-06 19:44:10.845915 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:10.846120 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:10.846149 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:10.846689 | orchestrator | changed: [testbed-manager] 2025-07-06 19:44:10.847627 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:44:10.848623 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:44:10.849440 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:44:10.850238 | orchestrator | 2025-07-06 19:44:10.850938 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-07-06 19:44:10.852123 | orchestrator | Sunday 06 July 2025 19:44:10 +0000 (0:00:14.145) 0:00:39.242 *********** 2025-07-06 19:44:10.918915 | orchestrator | ok: [testbed-manager] 2025-07-06 19:44:10.943793 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:10.969866 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:10.995775 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:11.050661 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:44:11.051554 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:44:11.052793 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:44:11.054199 | orchestrator | 2025-07-06 19:44:11.055657 | orchestrator | TASK 
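
The osism.commons.repository tasks above drop the classic /etc/apt/sources.list in favour of a deb822-style ubuntu.sources file and then refresh the package cache. A rough, self-contained equivalent for Ubuntu 24.04 might look like the following; the mirror URI, suites and key path are illustrative assumptions, not the values the role actually templates.

  - hosts: all
    become: true
    tasks:
      - name: Remove the legacy sources.list
        ansible.builtin.file:
          path: /etc/apt/sources.list
          state: absent

      - name: Provide a deb822 ubuntu.sources file
        ansible.builtin.copy:
          dest: /etc/apt/sources.list.d/ubuntu.sources
          content: |
            Types: deb
            URIs: http://archive.ubuntu.com/ubuntu
            Suites: noble noble-updates noble-backports noble-security
            Components: main restricted universe multiverse
            Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
          mode: "0644"

      - name: Update the apt package cache
        ansible.builtin.apt:
          update_cache: true
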
[osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-07-06 19:44:11.056230 | orchestrator | Sunday 06 July 2025 19:44:11 +0000 (0:00:00.211) 0:00:39.453 *********** 2025-07-06 19:44:11.124943 | orchestrator | ok: [testbed-manager] 2025-07-06 19:44:11.152113 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:11.179686 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:11.203699 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:11.255556 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:44:11.255819 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:44:11.256295 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:44:11.257328 | orchestrator | 2025-07-06 19:44:11.257729 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-07-06 19:44:11.258711 | orchestrator | Sunday 06 July 2025 19:44:11 +0000 (0:00:00.205) 0:00:39.658 *********** 2025-07-06 19:44:11.359036 | orchestrator | ok: [testbed-manager] 2025-07-06 19:44:11.385989 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:11.415777 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:11.471772 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:11.472248 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:44:11.473315 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:44:11.474700 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:44:11.475628 | orchestrator | 2025-07-06 19:44:11.476434 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-07-06 19:44:11.477418 | orchestrator | Sunday 06 July 2025 19:44:11 +0000 (0:00:00.217) 0:00:39.876 *********** 2025-07-06 19:44:11.748244 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:44:11.749075 | orchestrator | 2025-07-06 19:44:11.750118 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-07-06 19:44:11.752841 | orchestrator | Sunday 06 July 2025 19:44:11 +0000 (0:00:00.275) 0:00:40.151 *********** 2025-07-06 19:44:13.387872 | orchestrator | ok: [testbed-manager] 2025-07-06 19:44:13.390112 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:13.391747 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:13.393194 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:13.394661 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:44:13.396367 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:44:13.396539 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:44:13.397050 | orchestrator | 2025-07-06 19:44:13.397419 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-07-06 19:44:13.398602 | orchestrator | Sunday 06 July 2025 19:44:13 +0000 (0:00:01.637) 0:00:41.789 *********** 2025-07-06 19:44:14.464861 | orchestrator | changed: [testbed-manager] 2025-07-06 19:44:14.465807 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:44:14.466131 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:44:14.467280 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:44:14.468256 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:44:14.469102 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:44:14.469960 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:44:14.470506 | orchestrator | 2025-07-06 19:44:14.471107 | 
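
The rsyslog tasks above install the package and replace /etc/rsyslog.conf; the tasks that follow in the log manage the service and add a forward rule towards the local fluentd daemon. A simplified sketch of that forwarding setup is given below; the fluentd target address and port are assumptions, and the role's real rsyslog.conf template is not reproduced here.

  - hosts: all
    become: true
    tasks:
      - name: Install rsyslog
        ansible.builtin.apt:
          name: rsyslog
          state: present

      - name: Forward all syslog messages to a local fluentd input (assumed 127.0.0.1:5140/udp)
        ansible.builtin.copy:
          dest: /etc/rsyslog.d/10-fluentd.conf
          content: |
            *.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")
          mode: "0644"

      - name: Enable and restart rsyslog so the new rule is picked up
        ansible.builtin.systemd:
          name: rsyslog
          state: restarted
          enabled: true
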
orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-07-06 19:44:14.471935 | orchestrator | Sunday 06 July 2025 19:44:14 +0000 (0:00:01.076) 0:00:42.865 *********** 2025-07-06 19:44:15.355405 | orchestrator | ok: [testbed-manager] 2025-07-06 19:44:15.356293 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:15.358058 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:15.358092 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:44:15.358843 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:15.360126 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:44:15.360434 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:44:15.361858 | orchestrator | 2025-07-06 19:44:15.362914 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-07-06 19:44:15.363422 | orchestrator | Sunday 06 July 2025 19:44:15 +0000 (0:00:00.892) 0:00:43.758 *********** 2025-07-06 19:44:15.645858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:44:15.646394 | orchestrator | 2025-07-06 19:44:15.647420 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-07-06 19:44:15.648834 | orchestrator | Sunday 06 July 2025 19:44:15 +0000 (0:00:00.288) 0:00:44.046 *********** 2025-07-06 19:44:16.712407 | orchestrator | changed: [testbed-manager] 2025-07-06 19:44:16.713331 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:44:16.713794 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:44:16.718432 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:44:16.720101 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:44:16.720355 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:44:16.721907 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:44:16.722208 | orchestrator | 2025-07-06 19:44:16.722813 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-07-06 19:44:16.723110 | orchestrator | Sunday 06 July 2025 19:44:16 +0000 (0:00:01.066) 0:00:45.113 *********** 2025-07-06 19:44:16.788601 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:44:16.818962 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:44:16.841222 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:44:16.867990 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:44:17.000570 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:44:17.000744 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:44:17.001428 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:44:17.001665 | orchestrator | 2025-07-06 19:44:17.002228 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-07-06 19:44:17.002983 | orchestrator | Sunday 06 July 2025 19:44:16 +0000 (0:00:00.290) 0:00:45.403 *********** 2025-07-06 19:44:28.460662 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:44:28.460800 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:44:28.460825 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:44:28.461006 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:44:28.462301 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:44:28.463364 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:44:28.464037 | orchestrator | changed: 
[testbed-manager] 2025-07-06 19:44:28.465215 | orchestrator | 2025-07-06 19:44:28.466223 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-07-06 19:44:28.466615 | orchestrator | Sunday 06 July 2025 19:44:28 +0000 (0:00:11.454) 0:00:56.857 *********** 2025-07-06 19:44:29.668337 | orchestrator | ok: [testbed-manager] 2025-07-06 19:44:29.669312 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:29.669382 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:44:29.669396 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:29.669409 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:44:29.669438 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:29.669450 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:44:29.669600 | orchestrator | 2025-07-06 19:44:29.669619 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-07-06 19:44:29.669853 | orchestrator | Sunday 06 July 2025 19:44:29 +0000 (0:00:01.211) 0:00:58.069 *********** 2025-07-06 19:44:31.494941 | orchestrator | ok: [testbed-manager] 2025-07-06 19:44:31.495048 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:31.495081 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:31.495208 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:31.495645 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:44:31.496120 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:44:31.496929 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:44:31.497350 | orchestrator | 2025-07-06 19:44:31.497777 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-07-06 19:44:31.498532 | orchestrator | Sunday 06 July 2025 19:44:31 +0000 (0:00:01.823) 0:00:59.892 *********** 2025-07-06 19:44:31.574687 | orchestrator | ok: [testbed-manager] 2025-07-06 19:44:31.600215 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:31.627188 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:31.656665 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:31.717313 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:44:31.718769 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:44:31.718807 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:44:31.722002 | orchestrator | 2025-07-06 19:44:31.722313 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-07-06 19:44:31.726931 | orchestrator | Sunday 06 July 2025 19:44:31 +0000 (0:00:00.228) 0:01:00.120 *********** 2025-07-06 19:44:31.803973 | orchestrator | ok: [testbed-manager] 2025-07-06 19:44:31.828052 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:31.854000 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:31.878166 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:31.932531 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:44:31.933627 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:44:31.933818 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:44:31.934765 | orchestrator | 2025-07-06 19:44:31.936682 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-07-06 19:44:31.936711 | orchestrator | Sunday 06 July 2025 19:44:31 +0000 (0:00:00.215) 0:01:00.336 *********** 2025-07-06 19:44:32.238205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 2025-07-06 19:44:32.238308 | orchestrator | 2025-07-06 19:44:32.238337 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-07-06 19:44:32.239562 | orchestrator | Sunday 06 July 2025 19:44:32 +0000 (0:00:00.302) 0:01:00.638 *********** 2025-07-06 19:44:33.844771 | orchestrator | ok: [testbed-manager] 2025-07-06 19:44:33.846239 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:33.846369 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:33.846808 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:33.847355 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:44:33.848059 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:44:33.848585 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:44:33.849315 | orchestrator | 2025-07-06 19:44:33.849710 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-07-06 19:44:33.850264 | orchestrator | Sunday 06 July 2025 19:44:33 +0000 (0:00:01.606) 0:01:02.245 *********** 2025-07-06 19:44:34.403274 | orchestrator | changed: [testbed-manager] 2025-07-06 19:44:34.404177 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:44:34.405977 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:44:34.407127 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:44:34.407667 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:44:34.408187 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:44:34.408863 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:44:34.410078 | orchestrator | 2025-07-06 19:44:34.410261 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-07-06 19:44:34.410794 | orchestrator | Sunday 06 July 2025 19:44:34 +0000 (0:00:00.560) 0:01:02.805 *********** 2025-07-06 19:44:34.476793 | orchestrator | ok: [testbed-manager] 2025-07-06 19:44:34.504980 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:34.529441 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:34.554357 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:34.607698 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:44:34.608183 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:44:34.608908 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:44:34.609607 | orchestrator | 2025-07-06 19:44:34.610748 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-07-06 19:44:34.613788 | orchestrator | Sunday 06 July 2025 19:44:34 +0000 (0:00:00.205) 0:01:03.011 *********** 2025-07-06 19:44:35.768967 | orchestrator | ok: [testbed-manager] 2025-07-06 19:44:35.769076 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:35.769858 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:35.771341 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:35.772040 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:44:35.772731 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:44:35.773665 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:44:35.774731 | orchestrator | 2025-07-06 19:44:35.775002 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-07-06 19:44:35.775897 | orchestrator | Sunday 06 July 2025 19:44:35 +0000 (0:00:01.159) 0:01:04.170 *********** 2025-07-06 19:44:37.502115 | orchestrator | changed: [testbed-manager] 2025-07-06 19:44:37.503421 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:44:37.505163 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:44:37.505722 | 
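
The osism.commons.packages tasks running here (their results continue on the following lines) switch needrestart to a non-interactive mode, refresh the apt cache and pre-download upgrades before applying them and installing the required packages. A condensed sketch, with an illustrative needrestart drop-in path and package list rather than the role's real defaults:

  - hosts: all
    become: true
    vars:
      required_packages: [tmux, vim]   # illustrative; the real list comes from the role defaults
    tasks:
      - name: Let needrestart restart services automatically instead of prompting
        ansible.builtin.copy:
          dest: /etc/needrestart/conf.d/zz-example.conf   # file name is an assumption
          content: |
            $nrconf{restart} = 'a';
          mode: "0644"

      - name: Refresh the cache and apply pending upgrades
        ansible.builtin.apt:
          update_cache: true
          cache_valid_time: 3600
          upgrade: dist

      - name: Install the required packages
        ansible.builtin.apt:
          name: "{{ required_packages }}"
          state: present
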
orchestrator | changed: [testbed-node-5] 2025-07-06 19:44:37.506943 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:44:37.507803 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:44:37.508204 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:44:37.509102 | orchestrator | 2025-07-06 19:44:37.509834 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-07-06 19:44:37.511127 | orchestrator | Sunday 06 July 2025 19:44:37 +0000 (0:00:01.733) 0:01:05.903 *********** 2025-07-06 19:44:39.802221 | orchestrator | ok: [testbed-manager] 2025-07-06 19:44:39.803158 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:39.805768 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:39.806970 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:39.808988 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:44:39.809011 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:44:39.809737 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:44:39.810874 | orchestrator | 2025-07-06 19:44:39.812419 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-07-06 19:44:39.813269 | orchestrator | Sunday 06 July 2025 19:44:39 +0000 (0:00:02.298) 0:01:08.202 *********** 2025-07-06 19:45:15.684876 | orchestrator | ok: [testbed-manager] 2025-07-06 19:45:15.684997 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:45:15.685013 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:45:15.685024 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:45:15.685098 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:45:15.685460 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:45:15.686552 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:45:15.687805 | orchestrator | 2025-07-06 19:45:15.688516 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-07-06 19:45:15.689321 | orchestrator | Sunday 06 July 2025 19:45:15 +0000 (0:00:35.877) 0:01:44.080 *********** 2025-07-06 19:46:31.985909 | orchestrator | changed: [testbed-manager] 2025-07-06 19:46:31.986114 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:46:31.986134 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:46:31.986146 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:46:31.986230 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:46:31.987056 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:46:31.987469 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:46:31.988140 | orchestrator | 2025-07-06 19:46:31.988708 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-07-06 19:46:31.989423 | orchestrator | Sunday 06 July 2025 19:46:31 +0000 (0:01:16.288) 0:03:00.368 *********** 2025-07-06 19:46:33.612017 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:33.612146 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:33.612377 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:33.616086 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:33.616186 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:33.616199 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:33.616209 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:33.616227 | orchestrator | 2025-07-06 19:46:33.616239 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-07-06 19:46:33.616299 | orchestrator | Sunday 06 July 2025 19:46:33 +0000 (0:00:01.644) 0:03:02.013 *********** 2025-07-06 
19:46:45.995215 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:45.996521 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:45.998882 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:46.000072 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:46.000977 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:46.004596 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:46.005425 | orchestrator | changed: [testbed-manager] 2025-07-06 19:46:46.007043 | orchestrator | 2025-07-06 19:46:46.007367 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-07-06 19:46:46.008759 | orchestrator | Sunday 06 July 2025 19:46:45 +0000 (0:00:12.380) 0:03:14.394 *********** 2025-07-06 19:46:46.398417 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-07-06 19:46:46.398535 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-07-06 19:46:46.399719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-07-06 19:46:46.400510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-07-06 19:46:46.405386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-07-06 19:46:46.406000 | orchestrator | 2025-07-06 19:46:46.406895 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-07-06 19:46:46.407883 | orchestrator | Sunday 06 July 2025 19:46:46 +0000 (0:00:00.405) 0:03:14.800 *********** 2025-07-06 19:46:46.462310 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-06 19:46:46.494849 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:46:46.496247 | orchestrator | skipping: [testbed-node-3] => 
(item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-06 19:46:46.533323 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:46:46.533713 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-06 19:46:46.567879 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-06 19:46:46.568372 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:46:46.588929 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:46:47.194374 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-06 19:46:47.195407 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-06 19:46:47.198260 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-06 19:46:47.199261 | orchestrator | 2025-07-06 19:46:47.199973 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-07-06 19:46:47.203794 | orchestrator | Sunday 06 July 2025 19:46:47 +0000 (0:00:00.794) 0:03:15.594 *********** 2025-07-06 19:46:47.311523 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-06 19:46:47.311748 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-06 19:46:47.311937 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-06 19:46:47.312811 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-06 19:46:47.313047 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-06 19:46:47.313556 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-06 19:46:47.314454 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-06 19:46:47.316233 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-06 19:46:47.317436 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-06 19:46:47.318898 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-06 19:46:47.319691 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-06 19:46:47.322873 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-06 19:46:47.323625 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-06 19:46:47.324325 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-06 19:46:47.325147 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-06 19:46:47.370695 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-06 19:46:47.370809 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:46:47.370835 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-06 19:46:47.370853 | 
orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-06 19:46:47.370870 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-06 19:46:47.371463 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-06 19:46:47.372349 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-06 19:46:47.373977 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-06 19:46:47.374013 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-06 19:46:47.374752 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-06 19:46:47.374937 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-06 19:46:47.375736 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-06 19:46:47.401063 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-06 19:46:47.401993 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:46:47.402079 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-06 19:46:47.402722 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-06 19:46:47.403467 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-06 19:46:47.438747 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:46:47.440242 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-06 19:46:47.440268 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-06 19:46:47.440987 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-06 19:46:47.478988 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-06 19:46:47.479074 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-06 19:46:47.479088 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-06 19:46:47.479495 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-06 19:46:47.480079 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-06 19:46:47.480923 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-06 19:46:47.481302 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-06 19:46:53.251927 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:46:53.253322 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-06 19:46:53.258427 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-06 19:46:53.258493 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-06 19:46:53.258508 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-07-06 19:46:53.259528 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-07-06 19:46:53.260677 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-07-06 19:46:53.262718 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-07-06 19:46:53.263346 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-07-06 19:46:53.263901 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-07-06 19:46:53.265022 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-07-06 19:46:53.266311 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-07-06 19:46:53.267285 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-07-06 19:46:53.270302 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-07-06 19:46:53.270810 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-07-06 19:46:53.270973 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-07-06 19:46:53.271261 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-07-06 19:46:53.271907 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-07-06 19:46:53.271932 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-07-06 19:46:53.272134 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-07-06 19:46:53.272390 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-07-06 19:46:53.272795 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-07-06 19:46:53.273066 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-07-06 19:46:53.273321 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-07-06 19:46:53.273599 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-07-06 19:46:53.273927 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-07-06 19:46:53.274169 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-07-06 19:46:53.274419 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-07-06 19:46:53.274854 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-07-06 19:46:53.275211 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-07-06 19:46:53.275424 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-07-06 19:46:53.275725 | orchestrator | 2025-07-06 19:46:53.275953 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-07-06 19:46:53.276287 | orchestrator | Sunday 06 July 2025 19:46:53 +0000 (0:00:06.055) 0:03:21.650 *********** 2025-07-06 19:46:54.872507 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-06 19:46:54.872700 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-06 19:46:54.874368 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-06 19:46:54.875190 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-06 19:46:54.875691 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-06 19:46:54.876204 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-06 19:46:54.878262 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-06 19:46:54.879135 | orchestrator | 2025-07-06 19:46:54.879443 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-07-06 19:46:54.880061 | orchestrator | Sunday 06 July 2025 19:46:54 +0000 (0:00:01.622) 0:03:23.273 *********** 2025-07-06 19:46:54.933193 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-06 19:46:54.970643 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:46:55.056892 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-06 19:46:55.376281 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:46:55.378130 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-06 19:46:55.379555 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:46:55.380546 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-06 19:46:55.382673 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:46:55.384752 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-07-06 19:46:55.386161 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-07-06 19:46:55.388925 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-07-06 19:46:55.389549 | orchestrator | 2025-07-06 19:46:55.390907 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-07-06 19:46:55.391872 | orchestrator | Sunday 06 July 2025 19:46:55 +0000 (0:00:00.505) 0:03:23.778 *********** 2025-07-06 19:46:55.419476 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-06 19:46:55.448936 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:46:55.547820 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-06 19:46:55.969882 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:46:55.971569 | orchestrator | skipping: [testbed-node-1] => (item={'name': 
'fs.inotify.max_user_instances', 'value': 1024})  2025-07-06 19:46:55.973803 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:46:55.975158 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-06 19:46:55.976586 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:46:55.978106 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-07-06 19:46:55.978771 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-07-06 19:46:55.980022 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-07-06 19:46:55.981018 | orchestrator | 2025-07-06 19:46:55.981743 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-07-06 19:46:55.982318 | orchestrator | Sunday 06 July 2025 19:46:55 +0000 (0:00:00.593) 0:03:24.371 *********** 2025-07-06 19:46:56.067010 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:46:56.103108 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:46:56.128563 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:46:56.153463 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:46:56.287788 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:46:56.289579 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:46:56.291183 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:46:56.293363 | orchestrator | 2025-07-06 19:46:56.294181 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-07-06 19:46:56.295542 | orchestrator | Sunday 06 July 2025 19:46:56 +0000 (0:00:00.317) 0:03:24.688 *********** 2025-07-06 19:47:01.997735 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:47:01.997938 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:47:01.998673 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:47:02.001160 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:47:02.002851 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:47:02.003341 | orchestrator | ok: [testbed-manager] 2025-07-06 19:47:02.003855 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:47:02.004339 | orchestrator | 2025-07-06 19:47:02.005253 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-07-06 19:47:02.005459 | orchestrator | Sunday 06 July 2025 19:47:01 +0000 (0:00:05.710) 0:03:30.399 *********** 2025-07-06 19:47:02.072496 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-07-06 19:47:02.072602 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-07-06 19:47:02.107334 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:47:02.107529 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-07-06 19:47:02.140598 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:47:02.182488 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:47:02.183923 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-07-06 19:47:02.184997 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-07-06 19:47:02.214955 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:47:02.277105 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:47:02.278153 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-07-06 19:47:02.278987 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:47:02.279725 | orchestrator | 
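
The osism.commons.sysctl tasks above apply group-specific kernel parameters (the elasticsearch, rabbitmq, generic, compute and k3s_node sets listed in the log) and persist them. The sketch below applies a small subset of those values with ansible.posix.sysctl; running it against all hosts and the sysctl_file name are assumptions, since the real role selects a parameter set per host group.

  - hosts: all
    become: true
    tasks:
      - name: Apply and persist a subset of the kernel parameters shown above
        ansible.posix.sysctl:
          name: "{{ item.name }}"
          value: "{{ item.value }}"
          sysctl_file: /etc/sysctl.d/99-example.conf   # file name is an assumption
          state: present
          reload: true
        loop:
          - { name: vm.swappiness, value: 1 }
          - { name: vm.max_map_count, value: 262144 }
          - { name: net.core.somaxconn, value: 4096 }
          - { name: net.netfilter.nf_conntrack_max, value: 1048576 }
          - { name: fs.inotify.max_user_instances, value: 1024 }
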
skipping: [testbed-node-2] => (item=nscd)  2025-07-06 19:47:02.280595 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:47:02.281400 | orchestrator | 2025-07-06 19:47:02.282140 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-07-06 19:47:02.282781 | orchestrator | Sunday 06 July 2025 19:47:02 +0000 (0:00:00.280) 0:03:30.679 *********** 2025-07-06 19:47:03.353278 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-07-06 19:47:03.353445 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-07-06 19:47:03.354574 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-07-06 19:47:03.355814 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-07-06 19:47:03.356797 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-07-06 19:47:03.357845 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-07-06 19:47:03.358487 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-07-06 19:47:03.359373 | orchestrator | 2025-07-06 19:47:03.360223 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-07-06 19:47:03.361031 | orchestrator | Sunday 06 July 2025 19:47:03 +0000 (0:00:01.073) 0:03:31.753 *********** 2025-07-06 19:47:03.861163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:47:03.864792 | orchestrator | 2025-07-06 19:47:03.865174 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-07-06 19:47:03.865906 | orchestrator | Sunday 06 July 2025 19:47:03 +0000 (0:00:00.508) 0:03:32.261 *********** 2025-07-06 19:47:05.200836 | orchestrator | ok: [testbed-manager] 2025-07-06 19:47:05.201461 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:47:05.202530 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:47:05.204519 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:47:05.205250 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:47:05.207211 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:47:05.209978 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:47:05.211832 | orchestrator | 2025-07-06 19:47:05.211857 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-07-06 19:47:05.212291 | orchestrator | Sunday 06 July 2025 19:47:05 +0000 (0:00:01.339) 0:03:33.601 *********** 2025-07-06 19:47:06.583433 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:47:06.583646 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:47:06.584492 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:47:06.588832 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:47:06.589799 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:47:06.591117 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:47:06.591950 | orchestrator | ok: [testbed-manager] 2025-07-06 19:47:06.592680 | orchestrator | 2025-07-06 19:47:06.593832 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-07-06 19:47:06.594431 | orchestrator | Sunday 06 July 2025 19:47:06 +0000 (0:00:01.381) 0:03:34.983 *********** 2025-07-06 19:47:07.177524 | orchestrator | changed: [testbed-manager] 2025-07-06 19:47:07.178450 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:47:07.179825 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:47:07.179849 | orchestrator | 
changed: [testbed-node-5] 2025-07-06 19:47:07.180059 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:47:07.180913 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:47:07.181425 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:47:07.181735 | orchestrator | 2025-07-06 19:47:07.181912 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-07-06 19:47:07.182402 | orchestrator | Sunday 06 July 2025 19:47:07 +0000 (0:00:00.595) 0:03:35.578 *********** 2025-07-06 19:47:07.856159 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:47:07.856760 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:47:07.857579 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:47:07.858518 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:47:07.859432 | orchestrator | ok: [testbed-manager] 2025-07-06 19:47:07.860225 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:47:07.860815 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:47:07.861560 | orchestrator | 2025-07-06 19:47:07.862387 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-07-06 19:47:07.863252 | orchestrator | Sunday 06 July 2025 19:47:07 +0000 (0:00:00.678) 0:03:36.256 *********** 2025-07-06 19:47:08.858914 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751829938.0624804, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 19:47:08.859121 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751829911.1059852, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 19:47:08.860367 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751829926.1021786, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 19:47:08.862154 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751829932.2028966, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 19:47:08.863784 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751829856.861214, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 19:47:08.864539 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751829935.8063638, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 19:47:08.865420 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751829925.2636354, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 19:47:08.866200 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751829830.4552987, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 19:47:08.866849 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751829809.223284, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 19:47:08.867473 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751829821.2600787, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 19:47:08.867819 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751829887.3242083, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 19:47:08.868465 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751829826.0114257, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 19:47:08.869098 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751829833.322992, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 19:47:08.869835 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751829817.8945334, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 19:47:08.870300 | orchestrator | 2025-07-06 19:47:08.870862 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-07-06 19:47:08.871374 | orchestrator | Sunday 06 July 2025 19:47:08 +0000 (0:00:01.002) 0:03:37.259 *********** 2025-07-06 19:47:10.020393 | orchestrator | changed: [testbed-manager] 2025-07-06 19:47:10.020538 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:47:10.021059 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:47:10.021858 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:47:10.022819 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:47:10.023284 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:47:10.024214 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:47:10.024775 | orchestrator | 2025-07-06 19:47:10.025395 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-07-06 19:47:10.025852 | orchestrator | Sunday 06 July 2025 19:47:10 +0000 (0:00:01.162) 0:03:38.421 *********** 2025-07-06 19:47:11.188603 | orchestrator 
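
The PAM edits and file copies logged above follow the usual pattern for a motd role: pam_motd is disabled in /etc/pam.d/sshd and /etc/pam.d/login (the two loop items shown), the static files are copied, and sshd is told not to print the motd a second time. A minimal, illustrative sketch of tasks of that shape, assuming generic banner content and regexes (this is not the actual osism.commons.motd role):

- hosts: all
  become: true
  tasks:
    - name: Disable pam_motd so only the static motd is displayed
      ansible.builtin.replace:
        path: "{{ item }}"
        regexp: '^(session\s+optional\s+pam_motd\.so.*)$'
        replace: '# \1'
      loop:
        - /etc/pam.d/sshd
        - /etc/pam.d/login

    - name: Copy motd file
      ansible.builtin.copy:
        content: "Managed by Ansible\n"   # placeholder banner text
        dest: /etc/motd
        owner: root
        group: root
        mode: "0644"

    - name: Configure SSH to not print the motd
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PrintMotd'
        line: PrintMotd no
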
| changed: [testbed-manager] 2025-07-06 19:47:11.188803 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:47:11.188819 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:47:11.190892 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:47:11.191597 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:47:11.192378 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:47:11.193381 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:47:11.194115 | orchestrator | 2025-07-06 19:47:11.194709 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-07-06 19:47:11.195249 | orchestrator | Sunday 06 July 2025 19:47:11 +0000 (0:00:01.164) 0:03:39.586 *********** 2025-07-06 19:47:12.322965 | orchestrator | changed: [testbed-manager] 2025-07-06 19:47:12.324652 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:47:12.325010 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:47:12.325967 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:47:12.326988 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:47:12.327921 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:47:12.328523 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:47:12.329520 | orchestrator | 2025-07-06 19:47:12.330148 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-07-06 19:47:12.330565 | orchestrator | Sunday 06 July 2025 19:47:12 +0000 (0:00:01.138) 0:03:40.724 *********** 2025-07-06 19:47:12.386353 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:47:12.419607 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:47:12.467882 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:47:12.500237 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:47:12.532318 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:47:12.584116 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:47:12.584327 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:47:12.586860 | orchestrator | 2025-07-06 19:47:12.586906 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-07-06 19:47:12.586919 | orchestrator | Sunday 06 July 2025 19:47:12 +0000 (0:00:00.261) 0:03:40.986 *********** 2025-07-06 19:47:13.358948 | orchestrator | ok: [testbed-manager] 2025-07-06 19:47:13.360013 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:47:13.361044 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:47:13.362241 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:47:13.363296 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:47:13.364034 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:47:13.364762 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:47:13.365704 | orchestrator | 2025-07-06 19:47:13.366905 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-07-06 19:47:13.367015 | orchestrator | Sunday 06 July 2025 19:47:13 +0000 (0:00:00.772) 0:03:41.758 *********** 2025-07-06 19:47:13.747619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:47:13.748516 | orchestrator | 2025-07-06 19:47:13.749578 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-07-06 19:47:13.750285 | orchestrator | Sunday 06 July 2025 
19:47:13 +0000 (0:00:00.391) 0:03:42.150 *********** 2025-07-06 19:47:21.463489 | orchestrator | ok: [testbed-manager] 2025-07-06 19:47:21.464420 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:47:21.465234 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:47:21.466273 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:47:21.466775 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:47:21.467401 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:47:21.467934 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:47:21.469750 | orchestrator | 2025-07-06 19:47:21.470907 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-07-06 19:47:21.471874 | orchestrator | Sunday 06 July 2025 19:47:21 +0000 (0:00:07.711) 0:03:49.861 *********** 2025-07-06 19:47:22.748557 | orchestrator | ok: [testbed-manager] 2025-07-06 19:47:22.752812 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:47:22.753282 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:47:22.753995 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:47:22.755389 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:47:22.755799 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:47:22.756788 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:47:22.757070 | orchestrator | 2025-07-06 19:47:22.758145 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-07-06 19:47:22.758468 | orchestrator | Sunday 06 July 2025 19:47:22 +0000 (0:00:01.286) 0:03:51.148 *********** 2025-07-06 19:47:23.778581 | orchestrator | ok: [testbed-manager] 2025-07-06 19:47:23.778740 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:47:23.778824 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:47:23.778841 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:47:23.779105 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:47:23.781957 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:47:23.782003 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:47:23.782071 | orchestrator | 2025-07-06 19:47:23.785122 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-07-06 19:47:23.785532 | orchestrator | Sunday 06 July 2025 19:47:23 +0000 (0:00:01.031) 0:03:52.179 *********** 2025-07-06 19:47:24.275850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:47:24.275977 | orchestrator | 2025-07-06 19:47:24.276114 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-07-06 19:47:24.278007 | orchestrator | Sunday 06 July 2025 19:47:24 +0000 (0:00:00.496) 0:03:52.676 *********** 2025-07-06 19:47:33.062850 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:47:33.063040 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:47:33.064847 | orchestrator | changed: [testbed-manager] 2025-07-06 19:47:33.064875 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:47:33.064887 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:47:33.065228 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:47:33.065886 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:47:33.066252 | orchestrator | 2025-07-06 19:47:33.066780 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-07-06 
19:47:33.068745 | orchestrator | Sunday 06 July 2025 19:47:33 +0000 (0:00:08.788) 0:04:01.464 *********** 2025-07-06 19:47:33.687212 | orchestrator | changed: [testbed-manager] 2025-07-06 19:47:33.687930 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:47:33.688848 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:47:33.689714 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:47:33.690448 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:47:33.691178 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:47:33.691988 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:47:33.692331 | orchestrator | 2025-07-06 19:47:33.693128 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-07-06 19:47:33.693585 | orchestrator | Sunday 06 July 2025 19:47:33 +0000 (0:00:00.625) 0:04:02.089 *********** 2025-07-06 19:47:34.885639 | orchestrator | changed: [testbed-manager] 2025-07-06 19:47:34.885870 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:47:34.887351 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:47:34.889460 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:47:34.890769 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:47:34.892024 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:47:34.892980 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:47:34.894079 | orchestrator | 2025-07-06 19:47:34.894919 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-07-06 19:47:34.895928 | orchestrator | Sunday 06 July 2025 19:47:34 +0000 (0:00:01.197) 0:04:03.287 *********** 2025-07-06 19:47:36.676152 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:47:36.676255 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:47:36.677979 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:47:36.681334 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:47:36.684047 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:47:36.684080 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:47:36.684093 | orchestrator | changed: [testbed-manager] 2025-07-06 19:47:36.684196 | orchestrator | 2025-07-06 19:47:36.685307 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-07-06 19:47:36.687828 | orchestrator | Sunday 06 July 2025 19:47:36 +0000 (0:00:01.788) 0:04:05.075 *********** 2025-07-06 19:47:36.781214 | orchestrator | ok: [testbed-manager] 2025-07-06 19:47:36.833171 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:47:36.868047 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:47:36.900749 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:47:36.967801 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:47:36.967903 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:47:36.969121 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:47:36.969998 | orchestrator | 2025-07-06 19:47:36.973585 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-07-06 19:47:36.974277 | orchestrator | Sunday 06 July 2025 19:47:36 +0000 (0:00:00.296) 0:04:05.371 *********** 2025-07-06 19:47:37.095551 | orchestrator | ok: [testbed-manager] 2025-07-06 19:47:37.139910 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:47:37.182773 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:47:37.214356 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:47:37.295933 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:47:37.297075 | 
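
The smartd steps recorded here (package install, /var/log/smartd directory, configuration file, service management) follow the standard shape for this kind of role. A minimal sketch, with an assumed DEVICESCAN directive as the configuration content rather than the actual osism.services.smartd template:

- hosts: all
  become: true
  tasks:
    - name: Install smartmontools package
      ansible.builtin.apt:
        name: smartmontools
        state: present

    - name: Create /var/log/smartd directory
      ansible.builtin.file:
        path: /var/log/smartd
        state: directory
        owner: root
        group: root
        mode: "0755"

    - name: Copy smartmontools configuration file
      ansible.builtin.copy:
        # Example directive: short self-test daily at 02:00, long test Saturday 03:00
        content: |
          DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../6/03)
        dest: /etc/smartd.conf
        mode: "0644"

    - name: Manage smartd service
      ansible.builtin.service:
        name: smartd
        state: started
        enabled: true
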
orchestrator | ok: [testbed-node-1] 2025-07-06 19:47:37.300121 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:47:37.300184 | orchestrator | 2025-07-06 19:47:37.300199 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-07-06 19:47:37.300212 | orchestrator | Sunday 06 July 2025 19:47:37 +0000 (0:00:00.326) 0:04:05.697 *********** 2025-07-06 19:47:37.396259 | orchestrator | ok: [testbed-manager] 2025-07-06 19:47:37.431267 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:47:37.465729 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:47:37.499915 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:47:37.610313 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:47:37.610487 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:47:37.610737 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:47:37.611164 | orchestrator | 2025-07-06 19:47:37.611499 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-07-06 19:47:37.612295 | orchestrator | Sunday 06 July 2025 19:47:37 +0000 (0:00:00.316) 0:04:06.014 *********** 2025-07-06 19:47:42.563513 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:47:42.563882 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:47:42.563916 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:47:42.564880 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:47:42.565262 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:47:42.568380 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:47:42.568766 | orchestrator | ok: [testbed-manager] 2025-07-06 19:47:42.569695 | orchestrator | 2025-07-06 19:47:42.570685 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-07-06 19:47:42.571577 | orchestrator | Sunday 06 July 2025 19:47:42 +0000 (0:00:04.952) 0:04:10.966 *********** 2025-07-06 19:47:42.946289 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:47:42.947009 | orchestrator | 2025-07-06 19:47:42.950235 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-07-06 19:47:42.950293 | orchestrator | Sunday 06 July 2025 19:47:42 +0000 (0:00:00.381) 0:04:11.348 *********** 2025-07-06 19:47:43.022904 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-07-06 19:47:43.022996 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-07-06 19:47:43.025147 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-07-06 19:47:43.085753 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:47:43.086792 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-07-06 19:47:43.089950 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-07-06 19:47:43.089978 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-07-06 19:47:43.139756 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:47:43.143722 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-07-06 19:47:43.184250 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:47:43.185455 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-07-06 19:47:43.191619 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-07-06 
19:47:43.192996 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-07-06 19:47:43.223634 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:47:43.224344 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-07-06 19:47:43.314577 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-07-06 19:47:43.315881 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:47:43.317472 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:47:43.317841 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-07-06 19:47:43.318952 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-07-06 19:47:43.320144 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:47:43.320239 | orchestrator | 2025-07-06 19:47:43.320394 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-07-06 19:47:43.321010 | orchestrator | Sunday 06 July 2025 19:47:43 +0000 (0:00:00.368) 0:04:11.716 *********** 2025-07-06 19:47:43.701512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:47:43.702197 | orchestrator | 2025-07-06 19:47:43.703153 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-07-06 19:47:43.706233 | orchestrator | Sunday 06 July 2025 19:47:43 +0000 (0:00:00.387) 0:04:12.104 *********** 2025-07-06 19:47:43.780515 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-07-06 19:47:43.780622 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-07-06 19:47:43.816444 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:47:43.854476 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:47:43.854617 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-07-06 19:47:43.891826 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-07-06 19:47:43.891959 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:47:43.934941 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-07-06 19:47:43.935093 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:47:44.025044 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-07-06 19:47:44.025186 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:47:44.026348 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:47:44.027782 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-07-06 19:47:44.028635 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:47:44.030273 | orchestrator | 2025-07-06 19:47:44.030436 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-07-06 19:47:44.031814 | orchestrator | Sunday 06 July 2025 19:47:44 +0000 (0:00:00.322) 0:04:12.426 *********** 2025-07-06 19:47:44.571370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:47:44.571481 | orchestrator | 2025-07-06 19:47:44.571991 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] 
********************** 2025-07-06 19:47:44.573049 | orchestrator | Sunday 06 July 2025 19:47:44 +0000 (0:00:00.541) 0:04:12.968 *********** 2025-07-06 19:48:19.301976 | orchestrator | changed: [testbed-manager] 2025-07-06 19:48:19.302176 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:48:19.302193 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:48:19.302274 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:48:19.304975 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:48:19.305033 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:48:19.305046 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:48:19.306304 | orchestrator | 2025-07-06 19:48:19.306416 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-07-06 19:48:19.307499 | orchestrator | Sunday 06 July 2025 19:48:19 +0000 (0:00:34.732) 0:04:47.700 *********** 2025-07-06 19:48:27.541854 | orchestrator | changed: [testbed-manager] 2025-07-06 19:48:27.542821 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:48:27.543082 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:48:27.544116 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:48:27.545644 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:48:27.546205 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:48:27.546785 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:48:27.547324 | orchestrator | 2025-07-06 19:48:27.548116 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-07-06 19:48:27.548302 | orchestrator | Sunday 06 July 2025 19:48:27 +0000 (0:00:08.240) 0:04:55.941 *********** 2025-07-06 19:48:35.576990 | orchestrator | changed: [testbed-manager] 2025-07-06 19:48:35.577182 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:48:35.578818 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:48:35.581535 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:48:35.582160 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:48:35.582937 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:48:35.583860 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:48:35.584621 | orchestrator | 2025-07-06 19:48:35.586236 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-07-06 19:48:35.586550 | orchestrator | Sunday 06 July 2025 19:48:35 +0000 (0:00:08.035) 0:05:03.976 *********** 2025-07-06 19:48:37.172772 | orchestrator | ok: [testbed-manager] 2025-07-06 19:48:37.173941 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:48:37.175158 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:48:37.176502 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:48:37.177089 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:48:37.177832 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:48:37.178608 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:48:37.180056 | orchestrator | 2025-07-06 19:48:37.180089 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-07-06 19:48:37.180835 | orchestrator | Sunday 06 July 2025 19:48:37 +0000 (0:00:01.598) 0:05:05.574 *********** 2025-07-06 19:48:43.151667 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:48:43.151918 | orchestrator | changed: [testbed-manager] 2025-07-06 19:48:43.152103 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:48:43.153293 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:48:43.154962 | orchestrator | changed: 
[testbed-node-2] 2025-07-06 19:48:43.155002 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:48:43.155369 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:48:43.156226 | orchestrator | 2025-07-06 19:48:43.156856 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-07-06 19:48:43.157470 | orchestrator | Sunday 06 July 2025 19:48:43 +0000 (0:00:05.978) 0:05:11.553 *********** 2025-07-06 19:48:43.573606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:48:43.573709 | orchestrator | 2025-07-06 19:48:43.574390 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-07-06 19:48:43.574944 | orchestrator | Sunday 06 July 2025 19:48:43 +0000 (0:00:00.422) 0:05:11.975 *********** 2025-07-06 19:48:44.331690 | orchestrator | changed: [testbed-manager] 2025-07-06 19:48:44.334197 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:48:44.334841 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:48:44.335481 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:48:44.336358 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:48:44.336872 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:48:44.339027 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:48:44.341765 | orchestrator | 2025-07-06 19:48:44.342329 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-07-06 19:48:44.343588 | orchestrator | Sunday 06 July 2025 19:48:44 +0000 (0:00:00.755) 0:05:12.731 *********** 2025-07-06 19:48:46.038924 | orchestrator | ok: [testbed-manager] 2025-07-06 19:48:46.039053 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:48:46.040579 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:48:46.041248 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:48:46.041957 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:48:46.042610 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:48:46.043461 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:48:46.043839 | orchestrator | 2025-07-06 19:48:46.044501 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-07-06 19:48:46.045216 | orchestrator | Sunday 06 July 2025 19:48:46 +0000 (0:00:01.705) 0:05:14.437 *********** 2025-07-06 19:48:46.797898 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:48:46.799004 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:48:46.799770 | orchestrator | changed: [testbed-manager] 2025-07-06 19:48:46.800412 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:48:46.800858 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:48:46.801375 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:48:46.802166 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:48:46.802479 | orchestrator | 2025-07-06 19:48:46.803083 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-07-06 19:48:46.803480 | orchestrator | Sunday 06 July 2025 19:48:46 +0000 (0:00:00.763) 0:05:15.200 *********** 2025-07-06 19:48:46.888901 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:48:46.937981 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:48:46.973153 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:48:47.005371 | orchestrator | 
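
The cloud-init and timezone steps above boil down to deleting the cloud-init configuration tree and pinning the clock to UTC. A sketch under the assumption that /etc/cloud is the directory being removed (the path itself is not shown in the log):

- hosts: all
  become: true
  tasks:
    - name: Remove cloud-init configuration directory
      ansible.builtin.file:
        path: /etc/cloud          # assumed path, not shown in the log
        state: absent

    - name: Install tzdata package
      ansible.builtin.apt:
        name: tzdata
        state: present

    - name: Set timezone to UTC
      community.general.timezone:
        name: Etc/UTC
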
skipping: [testbed-node-5] 2025-07-06 19:48:47.056495 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:48:47.056975 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:48:47.057717 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:48:47.059669 | orchestrator | 2025-07-06 19:48:47.059711 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-07-06 19:48:47.060087 | orchestrator | Sunday 06 July 2025 19:48:47 +0000 (0:00:00.258) 0:05:15.459 *********** 2025-07-06 19:48:47.118812 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:48:47.148070 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:48:47.181373 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:48:47.212180 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:48:47.242058 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:48:47.432451 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:48:47.432635 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:48:47.436245 | orchestrator | 2025-07-06 19:48:47.436318 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-07-06 19:48:47.436330 | orchestrator | Sunday 06 July 2025 19:48:47 +0000 (0:00:00.375) 0:05:15.834 *********** 2025-07-06 19:48:47.547877 | orchestrator | ok: [testbed-manager] 2025-07-06 19:48:47.584140 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:48:47.616993 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:48:47.652130 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:48:47.726196 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:48:47.726683 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:48:47.727298 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:48:47.728030 | orchestrator | 2025-07-06 19:48:47.728626 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-07-06 19:48:47.729152 | orchestrator | Sunday 06 July 2025 19:48:47 +0000 (0:00:00.295) 0:05:16.129 *********** 2025-07-06 19:48:47.835260 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:48:47.868458 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:48:47.901315 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:48:47.934334 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:48:48.003509 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:48:48.004342 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:48:48.006090 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:48:48.007113 | orchestrator | 2025-07-06 19:48:48.008306 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-07-06 19:48:48.009173 | orchestrator | Sunday 06 July 2025 19:48:47 +0000 (0:00:00.277) 0:05:16.406 *********** 2025-07-06 19:48:48.103964 | orchestrator | ok: [testbed-manager] 2025-07-06 19:48:48.140323 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:48:48.193397 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:48:48.229023 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:48:48.317881 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:48:48.318647 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:48:48.319634 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:48:48.320676 | orchestrator | 2025-07-06 19:48:48.321761 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-07-06 19:48:48.322284 | orchestrator | Sunday 06 July 2025 19:48:48 +0000 
(0:00:00.311) 0:05:16.718 *********** 2025-07-06 19:48:48.427026 | orchestrator | ok: [testbed-manager] =>  2025-07-06 19:48:48.427427 | orchestrator |  docker_version: 5:27.5.1 2025-07-06 19:48:48.458566 | orchestrator | ok: [testbed-node-3] =>  2025-07-06 19:48:48.459235 | orchestrator |  docker_version: 5:27.5.1 2025-07-06 19:48:48.490270 | orchestrator | ok: [testbed-node-4] =>  2025-07-06 19:48:48.491005 | orchestrator |  docker_version: 5:27.5.1 2025-07-06 19:48:48.524060 | orchestrator | ok: [testbed-node-5] =>  2025-07-06 19:48:48.524248 | orchestrator |  docker_version: 5:27.5.1 2025-07-06 19:48:48.578273 | orchestrator | ok: [testbed-node-0] =>  2025-07-06 19:48:48.578503 | orchestrator |  docker_version: 5:27.5.1 2025-07-06 19:48:48.578869 | orchestrator | ok: [testbed-node-1] =>  2025-07-06 19:48:48.579896 | orchestrator |  docker_version: 5:27.5.1 2025-07-06 19:48:48.580861 | orchestrator | ok: [testbed-node-2] =>  2025-07-06 19:48:48.581581 | orchestrator |  docker_version: 5:27.5.1 2025-07-06 19:48:48.582909 | orchestrator | 2025-07-06 19:48:48.583936 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-07-06 19:48:48.584055 | orchestrator | Sunday 06 July 2025 19:48:48 +0000 (0:00:00.262) 0:05:16.980 *********** 2025-07-06 19:48:48.703630 | orchestrator | ok: [testbed-manager] =>  2025-07-06 19:48:48.703798 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-06 19:48:48.846534 | orchestrator | ok: [testbed-node-3] =>  2025-07-06 19:48:48.846807 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-06 19:48:48.885538 | orchestrator | ok: [testbed-node-4] =>  2025-07-06 19:48:48.885786 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-06 19:48:48.925990 | orchestrator | ok: [testbed-node-5] =>  2025-07-06 19:48:48.926426 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-06 19:48:48.996637 | orchestrator | ok: [testbed-node-0] =>  2025-07-06 19:48:48.997576 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-06 19:48:48.999406 | orchestrator | ok: [testbed-node-1] =>  2025-07-06 19:48:49.000403 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-06 19:48:49.001275 | orchestrator | ok: [testbed-node-2] =>  2025-07-06 19:48:49.002825 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-06 19:48:49.004455 | orchestrator | 2025-07-06 19:48:49.006094 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-07-06 19:48:49.007249 | orchestrator | Sunday 06 July 2025 19:48:48 +0000 (0:00:00.418) 0:05:17.399 *********** 2025-07-06 19:48:49.075348 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:48:49.107050 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:48:49.138764 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:48:49.173147 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:48:49.202716 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:48:49.252190 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:48:49.252447 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:48:49.252906 | orchestrator | 2025-07-06 19:48:49.253423 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-07-06 19:48:49.254515 | orchestrator | Sunday 06 July 2025 19:48:49 +0000 (0:00:00.256) 0:05:17.656 *********** 2025-07-06 19:48:49.339329 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:48:49.377908 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:48:49.411691 | orchestrator 
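
The two debug tasks above only print the version variables that later drive the pinning and install steps; both resolve to 5:27.5.1 on every host. Defaulting and printing such variables roughly looks like the following (variable names as in the log, the defaulting logic is an assumption):

- hosts: all
  vars:
    docker_version: "5:27.5.1"            # value printed in the log
  tasks:
    - name: Set docker_cli_version variable to default value
      ansible.builtin.set_fact:
        docker_cli_version: "{{ docker_cli_version | default(docker_version) }}"

    - name: Print used docker version
      ansible.builtin.debug:
        var: docker_version

    - name: Print used docker cli version
      ansible.builtin.debug:
        var: docker_cli_version
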
| skipping: [testbed-node-4] 2025-07-06 19:48:49.450327 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:48:49.485077 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:48:49.554361 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:48:49.554555 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:48:49.558138 | orchestrator | 2025-07-06 19:48:49.558239 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-07-06 19:48:49.558257 | orchestrator | Sunday 06 July 2025 19:48:49 +0000 (0:00:00.300) 0:05:17.956 *********** 2025-07-06 19:48:49.969539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:48:49.969789 | orchestrator | 2025-07-06 19:48:49.970639 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-07-06 19:48:49.971069 | orchestrator | Sunday 06 July 2025 19:48:49 +0000 (0:00:00.415) 0:05:18.372 *********** 2025-07-06 19:48:50.784166 | orchestrator | ok: [testbed-manager] 2025-07-06 19:48:50.785999 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:48:50.787183 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:48:50.788098 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:48:50.789666 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:48:50.790150 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:48:50.791187 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:48:50.792071 | orchestrator | 2025-07-06 19:48:50.792675 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-07-06 19:48:50.793292 | orchestrator | Sunday 06 July 2025 19:48:50 +0000 (0:00:00.813) 0:05:19.185 *********** 2025-07-06 19:48:53.471332 | orchestrator | ok: [testbed-manager] 2025-07-06 19:48:53.471433 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:48:53.471512 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:48:53.472938 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:48:53.473378 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:48:53.474004 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:48:53.477414 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:48:53.478007 | orchestrator | 2025-07-06 19:48:53.478594 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-07-06 19:48:53.479653 | orchestrator | Sunday 06 July 2025 19:48:53 +0000 (0:00:02.686) 0:05:21.872 *********** 2025-07-06 19:48:53.541329 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-07-06 19:48:53.541430 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-07-06 19:48:53.616014 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-07-06 19:48:53.616116 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-07-06 19:48:53.616131 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-07-06 19:48:53.699381 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-07-06 19:48:53.699560 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:48:53.700825 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-07-06 19:48:53.701147 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-07-06 19:48:53.701459 | orchestrator | skipping: 
[testbed-node-4] => (item=docker-engine)  2025-07-06 19:48:53.775013 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:48:53.775624 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-07-06 19:48:53.776072 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-07-06 19:48:53.776641 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-07-06 19:48:53.991967 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:48:53.992196 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-07-06 19:48:53.996098 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-07-06 19:48:53.996281 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-07-06 19:48:54.065815 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:48:54.067289 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-07-06 19:48:54.070210 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-07-06 19:48:54.070284 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-07-06 19:48:54.200162 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:48:54.200957 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:48:54.202411 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-07-06 19:48:54.203366 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-07-06 19:48:54.205012 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-07-06 19:48:54.206823 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:48:54.207839 | orchestrator | 2025-07-06 19:48:54.208835 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-07-06 19:48:54.209426 | orchestrator | Sunday 06 July 2025 19:48:54 +0000 (0:00:00.729) 0:05:22.602 *********** 2025-07-06 19:49:00.238219 | orchestrator | ok: [testbed-manager] 2025-07-06 19:49:00.240122 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:49:00.240321 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:49:00.242011 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:49:00.242812 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:49:00.243662 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:49:00.244523 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:49:00.245069 | orchestrator | 2025-07-06 19:49:00.245567 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-07-06 19:49:00.246180 | orchestrator | Sunday 06 July 2025 19:49:00 +0000 (0:00:06.038) 0:05:28.640 *********** 2025-07-06 19:49:01.213124 | orchestrator | ok: [testbed-manager] 2025-07-06 19:49:01.213310 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:49:01.213920 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:49:01.214646 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:49:01.216439 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:49:01.217015 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:49:01.219893 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:49:01.220022 | orchestrator | 2025-07-06 19:49:01.221281 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-07-06 19:49:01.222454 | orchestrator | Sunday 06 July 2025 19:49:01 +0000 (0:00:00.973) 0:05:29.614 *********** 2025-07-06 19:49:09.277140 | orchestrator | ok: [testbed-manager] 2025-07-06 19:49:09.277257 | orchestrator | changed: 
[testbed-node-4] 2025-07-06 19:49:09.278180 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:49:09.279361 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:49:09.281804 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:49:09.281930 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:49:09.282648 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:49:09.283339 | orchestrator | 2025-07-06 19:49:09.284055 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-07-06 19:49:09.284439 | orchestrator | Sunday 06 July 2025 19:49:09 +0000 (0:00:08.062) 0:05:37.677 *********** 2025-07-06 19:49:13.734159 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:49:13.734272 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:49:13.734662 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:49:13.735243 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:49:13.736371 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:49:13.737527 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:49:13.739929 | orchestrator | changed: [testbed-manager] 2025-07-06 19:49:13.740853 | orchestrator | 2025-07-06 19:49:13.742128 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-07-06 19:49:13.743511 | orchestrator | Sunday 06 July 2025 19:49:13 +0000 (0:00:04.459) 0:05:42.136 *********** 2025-07-06 19:49:15.229692 | orchestrator | ok: [testbed-manager] 2025-07-06 19:49:15.229884 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:49:15.229970 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:49:15.230352 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:49:15.231027 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:49:15.233460 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:49:15.234896 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:49:15.235172 | orchestrator | 2025-07-06 19:49:15.235863 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-07-06 19:49:15.236486 | orchestrator | Sunday 06 July 2025 19:49:15 +0000 (0:00:01.493) 0:05:43.629 *********** 2025-07-06 19:49:16.525221 | orchestrator | ok: [testbed-manager] 2025-07-06 19:49:16.525693 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:49:16.526927 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:49:16.529344 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:49:16.529378 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:49:16.530248 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:49:16.530526 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:49:16.533598 | orchestrator | 2025-07-06 19:49:16.533678 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-07-06 19:49:16.533693 | orchestrator | Sunday 06 July 2025 19:49:16 +0000 (0:00:01.294) 0:05:44.924 *********** 2025-07-06 19:49:16.747707 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:49:16.828883 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:49:16.902218 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:49:16.985321 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:49:17.123243 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:49:17.124033 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:49:17.125127 | orchestrator | changed: [testbed-manager] 2025-07-06 19:49:17.126133 | orchestrator | 2025-07-06 19:49:17.127484 | 
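
The repository, pinning and lock tasks above are the core of the Docker install: the upstream apt repository is added, the docker packages are pinned to the version printed earlier, and containerd is put on hold between upgrades. A hedged sketch of the same pattern; the repository URL, keyring path, file names and pin priority are assumptions, not the osism role's actual values:

- hosts: all
  become: true
  vars:
    docker_version: "5:27.5.1"            # version printed earlier in the log
  tasks:
    - name: Ensure keyring directory exists
      ansible.builtin.file:
        path: /etc/apt/keyrings
        state: directory
        mode: "0755"

    - name: Add repository gpg key
      ansible.builtin.get_url:
        url: https://download.docker.com/linux/ubuntu/gpg
        dest: /etc/apt/keyrings/docker.asc
        mode: "0644"

    - name: Add repository
      ansible.builtin.apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
        state: present

    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true

    - name: Pin docker package version
      ansible.builtin.copy:
        dest: /etc/apt/preferences.d/docker-ce
        mode: "0644"
        content: |
          Package: docker-ce
          Pin: version {{ docker_version }}
          Pin-Priority: 1000

    - name: Lock containerd package
      ansible.builtin.dpkg_selections:
        name: containerd.io               # assumes the package is already installed
        selection: hold
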
orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-07-06 19:49:17.128287 | orchestrator | Sunday 06 July 2025 19:49:17 +0000 (0:00:00.601) 0:05:45.526 *********** 2025-07-06 19:49:27.623449 | orchestrator | ok: [testbed-manager] 2025-07-06 19:49:27.623751 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:49:27.623835 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:49:27.623849 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:49:27.623860 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:49:27.623871 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:49:27.623882 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:49:27.623894 | orchestrator | 2025-07-06 19:49:27.623906 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-07-06 19:49:27.623919 | orchestrator | Sunday 06 July 2025 19:49:27 +0000 (0:00:10.492) 0:05:56.018 *********** 2025-07-06 19:49:28.553022 | orchestrator | changed: [testbed-manager] 2025-07-06 19:49:28.553931 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:49:28.554483 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:49:28.557057 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:49:28.557631 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:49:28.558745 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:49:28.559519 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:49:28.560702 | orchestrator | 2025-07-06 19:49:28.561269 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-07-06 19:49:28.562131 | orchestrator | Sunday 06 July 2025 19:49:28 +0000 (0:00:00.934) 0:05:56.953 *********** 2025-07-06 19:49:38.202762 | orchestrator | ok: [testbed-manager] 2025-07-06 19:49:38.203265 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:49:38.203367 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:49:38.204274 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:49:38.204303 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:49:38.204864 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:49:38.205125 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:49:38.209203 | orchestrator | 2025-07-06 19:49:38.209393 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-07-06 19:49:38.209699 | orchestrator | Sunday 06 July 2025 19:49:38 +0000 (0:00:09.650) 0:06:06.604 *********** 2025-07-06 19:49:49.316043 | orchestrator | ok: [testbed-manager] 2025-07-06 19:49:49.316184 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:49:49.316211 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:49:49.316227 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:49:49.316676 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:49:49.317943 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:49:49.318598 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:49:49.320014 | orchestrator | 2025-07-06 19:49:49.320560 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-07-06 19:49:49.322757 | orchestrator | Sunday 06 July 2025 19:49:49 +0000 (0:00:11.107) 0:06:17.711 *********** 2025-07-06 19:49:49.720306 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-07-06 19:49:50.515599 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-07-06 19:49:50.516670 | orchestrator | ok: [testbed-node-4] 
=> (item=python3-docker) 2025-07-06 19:49:50.517137 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-07-06 19:49:50.521235 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-07-06 19:49:50.521469 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-07-06 19:49:50.522579 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-07-06 19:49:50.523335 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-07-06 19:49:50.524620 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-07-06 19:49:50.526160 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-07-06 19:49:50.526923 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-07-06 19:49:50.527383 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-07-06 19:49:50.528062 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-07-06 19:49:50.528913 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-07-06 19:49:50.529434 | orchestrator | 2025-07-06 19:49:50.530240 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-07-06 19:49:50.530658 | orchestrator | Sunday 06 July 2025 19:49:50 +0000 (0:00:01.202) 0:06:18.914 *********** 2025-07-06 19:49:50.673369 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:49:50.740514 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:49:50.819192 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:49:50.889348 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:49:50.960658 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:49:51.078614 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:49:51.078931 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:49:51.080217 | orchestrator | 2025-07-06 19:49:51.080681 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-07-06 19:49:51.081493 | orchestrator | Sunday 06 July 2025 19:49:51 +0000 (0:00:00.566) 0:06:19.480 *********** 2025-07-06 19:49:54.832762 | orchestrator | ok: [testbed-manager] 2025-07-06 19:49:54.833474 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:49:54.834720 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:49:54.835729 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:49:54.840423 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:49:54.840502 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:49:54.840518 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:49:54.840529 | orchestrator | 2025-07-06 19:49:54.840542 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-07-06 19:49:54.841680 | orchestrator | Sunday 06 July 2025 19:49:54 +0000 (0:00:03.752) 0:06:23.233 *********** 2025-07-06 19:49:54.966511 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:49:55.030730 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:49:55.095113 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:49:55.166458 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:49:55.231678 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:49:55.346642 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:49:55.346736 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:49:55.347065 | orchestrator | 2025-07-06 19:49:55.347881 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python 
bindings from pip)] *** 2025-07-06 19:49:55.348566 | orchestrator | Sunday 06 July 2025 19:49:55 +0000 (0:00:00.514) 0:06:23.747 *********** 2025-07-06 19:49:55.434643 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-07-06 19:49:55.435187 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-07-06 19:49:55.509863 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:49:55.511261 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-07-06 19:49:55.512476 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-07-06 19:49:55.600218 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-07-06 19:49:55.600838 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-07-06 19:49:55.686536 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:49:55.687345 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-07-06 19:49:55.688100 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-07-06 19:49:55.760043 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:49:55.761276 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-07-06 19:49:55.761320 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-07-06 19:49:55.832291 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:49:55.832585 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-07-06 19:49:55.833486 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-07-06 19:49:55.952663 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:49:55.953472 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:49:55.954309 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-07-06 19:49:55.955333 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-07-06 19:49:55.958367 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:49:55.958419 | orchestrator | 2025-07-06 19:49:55.958433 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-07-06 19:49:55.958446 | orchestrator | Sunday 06 July 2025 19:49:55 +0000 (0:00:00.607) 0:06:24.355 *********** 2025-07-06 19:49:56.086999 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:49:56.158093 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:49:56.222268 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:49:56.288410 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:49:56.357558 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:49:56.460668 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:49:56.460964 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:49:56.461925 | orchestrator | 2025-07-06 19:49:56.462328 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-07-06 19:49:56.463699 | orchestrator | Sunday 06 July 2025 19:49:56 +0000 (0:00:00.505) 0:06:24.861 *********** 2025-07-06 19:49:56.595080 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:49:56.657060 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:49:56.719502 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:49:56.786675 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:49:56.850484 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:49:56.938534 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:49:56.940176 | 
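
The Unblock/Block pair seen here (together with the skipped pip variant) is a dpkg-selections pattern: the distribution python bindings are taken off hold before installation, and can be put back on hold when the bindings are supposed to come from pip instead. Roughly, with an assumed switch variable and only the python3-docker package shown (the role also loops over python-docker):

- hosts: all
  become: true
  vars:
    docker_python_from_pip: false         # assumed switch, not the real variable name
  tasks:
    - name: Unblock installation of python docker packages
      ansible.builtin.dpkg_selections:
        name: python3-docker              # package must be known to dpkg
        selection: install
      when: not docker_python_from_pip

    - name: Block installation of python docker packages
      ansible.builtin.dpkg_selections:
        name: python3-docker
        selection: hold
      when: docker_python_from_pip
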
orchestrator | skipping: [testbed-node-2] 2025-07-06 19:49:56.940992 | orchestrator | 2025-07-06 19:49:56.942348 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-07-06 19:49:56.944104 | orchestrator | Sunday 06 July 2025 19:49:56 +0000 (0:00:00.477) 0:06:25.338 *********** 2025-07-06 19:49:57.068234 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:49:57.129049 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:49:57.198671 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:49:57.438788 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:49:57.508678 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:49:57.635260 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:49:57.635400 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:49:57.635579 | orchestrator | 2025-07-06 19:49:57.636186 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-07-06 19:49:57.636629 | orchestrator | Sunday 06 July 2025 19:49:57 +0000 (0:00:00.698) 0:06:26.037 *********** 2025-07-06 19:49:59.305636 | orchestrator | ok: [testbed-manager] 2025-07-06 19:49:59.305885 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:49:59.308261 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:49:59.310094 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:49:59.311278 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:49:59.312117 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:49:59.313032 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:49:59.314260 | orchestrator | 2025-07-06 19:49:59.316289 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-07-06 19:49:59.317018 | orchestrator | Sunday 06 July 2025 19:49:59 +0000 (0:00:01.669) 0:06:27.706 *********** 2025-07-06 19:50:00.236031 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:50:00.236467 | orchestrator | 2025-07-06 19:50:00.237956 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-07-06 19:50:00.238103 | orchestrator | Sunday 06 July 2025 19:50:00 +0000 (0:00:00.930) 0:06:28.636 *********** 2025-07-06 19:50:00.681220 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:01.113493 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:50:01.115962 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:50:01.119832 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:50:01.120603 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:50:01.121389 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:50:01.122238 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:50:01.122892 | orchestrator | 2025-07-06 19:50:01.123581 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-07-06 19:50:01.124373 | orchestrator | Sunday 06 July 2025 19:50:01 +0000 (0:00:00.876) 0:06:29.513 *********** 2025-07-06 19:50:01.607327 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:01.681425 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:50:02.199770 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:50:02.200335 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:50:02.201336 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:50:02.202389 | 
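
The config tasks running at this point create a systemd drop-in for the docker unit and install daemon.json, reloading systemd only when the drop-in actually changed. A sketch with placeholder contents for both files (the real osism templates are not visible in the log; /etc/docker is assumed to exist because the docker package is already installed):

- hosts: all
  become: true
  tasks:
    - name: Create systemd overlay directory
      ansible.builtin.file:
        path: /etc/systemd/system/docker.service.d
        state: directory
        mode: "0755"

    - name: Copy systemd overlay file
      ansible.builtin.copy:
        dest: /etc/systemd/system/docker.service.d/overlay.conf
        mode: "0644"
        content: |
          [Service]
          LimitNOFILE=1048576
      notify: Reload systemd daemon if systemd overlay file is changed

    - name: Copy daemon.json configuration file
      ansible.builtin.copy:
        dest: /etc/docker/daemon.json
        mode: "0644"
        content: |
          {
            "log-driver": "json-file",
            "log-opts": {"max-size": "10m", "max-file": "3"}
          }

  handlers:
    - name: Reload systemd daemon if systemd overlay file is changed
      ansible.builtin.systemd:
        daemon_reload: true
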
orchestrator | changed: [testbed-node-1] 2025-07-06 19:50:02.203013 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:50:02.204052 | orchestrator | 2025-07-06 19:50:02.205174 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-07-06 19:50:02.205456 | orchestrator | Sunday 06 July 2025 19:50:02 +0000 (0:00:01.087) 0:06:30.601 *********** 2025-07-06 19:50:03.594976 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:03.596090 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:50:03.596996 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:50:03.598941 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:50:03.599782 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:50:03.601425 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:50:03.602850 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:50:03.603462 | orchestrator | 2025-07-06 19:50:03.604372 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-07-06 19:50:03.605147 | orchestrator | Sunday 06 July 2025 19:50:03 +0000 (0:00:01.394) 0:06:31.995 *********** 2025-07-06 19:50:03.720659 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:50:04.952292 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:50:04.953284 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:50:04.953903 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:50:04.954242 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:04.954680 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:50:04.955385 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:50:04.955997 | orchestrator | 2025-07-06 19:50:04.956582 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-07-06 19:50:04.957529 | orchestrator | Sunday 06 July 2025 19:50:04 +0000 (0:00:01.354) 0:06:33.350 *********** 2025-07-06 19:50:06.344033 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:06.344560 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:50:06.346876 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:50:06.347821 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:50:06.348656 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:50:06.349595 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:50:06.350297 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:50:06.350959 | orchestrator | 2025-07-06 19:50:06.351634 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-07-06 19:50:06.352601 | orchestrator | Sunday 06 July 2025 19:50:06 +0000 (0:00:01.393) 0:06:34.743 *********** 2025-07-06 19:50:07.724652 | orchestrator | changed: [testbed-manager] 2025-07-06 19:50:07.724757 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:50:07.730162 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:50:07.730315 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:50:07.730895 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:50:07.731837 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:50:07.732996 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:50:07.734997 | orchestrator | 2025-07-06 19:50:07.735677 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-07-06 19:50:07.736683 | orchestrator | Sunday 06 July 2025 19:50:07 +0000 (0:00:01.382) 0:06:36.126 *********** 2025-07-06 19:50:08.771751 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:50:08.772061 | orchestrator | 2025-07-06 19:50:08.772834 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-07-06 19:50:08.776415 | orchestrator | Sunday 06 July 2025 19:50:08 +0000 (0:00:01.048) 0:06:37.174 *********** 2025-07-06 19:50:10.253891 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:10.255143 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:50:10.255196 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:50:10.255216 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:50:10.255343 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:10.256681 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:50:10.257027 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:50:10.257779 | orchestrator | 2025-07-06 19:50:10.258226 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-07-06 19:50:10.258625 | orchestrator | Sunday 06 July 2025 19:50:10 +0000 (0:00:01.477) 0:06:38.651 *********** 2025-07-06 19:50:11.374331 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:11.374448 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:50:11.374934 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:50:11.375235 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:50:11.376293 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:11.377632 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:50:11.378435 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:50:11.379556 | orchestrator | 2025-07-06 19:50:11.380241 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-07-06 19:50:11.381328 | orchestrator | Sunday 06 July 2025 19:50:11 +0000 (0:00:01.119) 0:06:39.771 *********** 2025-07-06 19:50:12.811974 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:12.812081 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:50:12.812362 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:50:12.814675 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:50:12.815402 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:12.816157 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:50:12.816756 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:50:12.819088 | orchestrator | 2025-07-06 19:50:12.819144 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-07-06 19:50:12.819181 | orchestrator | Sunday 06 July 2025 19:50:12 +0000 (0:00:01.438) 0:06:41.209 *********** 2025-07-06 19:50:13.991142 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:13.991235 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:50:13.991248 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:50:13.991312 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:50:13.991763 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:13.992431 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:50:13.992915 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:50:13.993035 | orchestrator | 2025-07-06 19:50:13.994719 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-07-06 19:50:13.995754 | orchestrator | Sunday 06 July 2025 19:50:13 +0000 (0:00:01.182) 0:06:42.392 *********** 2025-07-06 19:50:15.104797 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:50:15.104992 | orchestrator | 2025-07-06 19:50:15.105083 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-06 19:50:15.105912 | orchestrator | Sunday 06 July 2025 19:50:14 +0000 (0:00:00.838) 0:06:43.231 *********** 2025-07-06 19:50:15.106888 | orchestrator | 2025-07-06 19:50:15.107219 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-06 19:50:15.108226 | orchestrator | Sunday 06 July 2025 19:50:14 +0000 (0:00:00.037) 0:06:43.268 *********** 2025-07-06 19:50:15.108494 | orchestrator | 2025-07-06 19:50:15.109388 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-06 19:50:15.111003 | orchestrator | Sunday 06 July 2025 19:50:14 +0000 (0:00:00.036) 0:06:43.305 *********** 2025-07-06 19:50:15.111109 | orchestrator | 2025-07-06 19:50:15.111403 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-06 19:50:15.112200 | orchestrator | Sunday 06 July 2025 19:50:14 +0000 (0:00:00.042) 0:06:43.347 *********** 2025-07-06 19:50:15.112708 | orchestrator | 2025-07-06 19:50:15.113211 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-06 19:50:15.114266 | orchestrator | Sunday 06 July 2025 19:50:14 +0000 (0:00:00.037) 0:06:43.384 *********** 2025-07-06 19:50:15.115108 | orchestrator | 2025-07-06 19:50:15.115139 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-06 19:50:15.115222 | orchestrator | Sunday 06 July 2025 19:50:15 +0000 (0:00:00.037) 0:06:43.422 *********** 2025-07-06 19:50:15.115897 | orchestrator | 2025-07-06 19:50:15.116299 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-07-06 19:50:15.116940 | orchestrator | Sunday 06 July 2025 19:50:15 +0000 (0:00:00.044) 0:06:43.466 *********** 2025-07-06 19:50:15.117280 | orchestrator | 2025-07-06 19:50:15.117727 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-07-06 19:50:15.118294 | orchestrator | Sunday 06 July 2025 19:50:15 +0000 (0:00:00.038) 0:06:43.504 *********** 2025-07-06 19:50:16.481626 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:16.482500 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:50:16.483439 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:50:16.485270 | orchestrator | 2025-07-06 19:50:16.487118 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-07-06 19:50:16.488946 | orchestrator | Sunday 06 July 2025 19:50:16 +0000 (0:00:01.376) 0:06:44.881 *********** 2025-07-06 19:50:18.365387 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:50:18.366129 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:50:18.366681 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:50:18.367477 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:50:18.368558 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:50:18.370472 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:50:18.371233 | orchestrator | changed: [testbed-manager] 2025-07-06 19:50:18.372199 | orchestrator | 2025-07-06 19:50:18.373074 | 
orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-07-06 19:50:18.373956 | orchestrator | Sunday 06 July 2025 19:50:18 +0000 (0:00:01.883) 0:06:46.765 *********** 2025-07-06 19:50:19.453234 | orchestrator | changed: [testbed-manager] 2025-07-06 19:50:19.457141 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:50:19.457259 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:50:19.459534 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:50:19.460321 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:50:19.460754 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:50:19.461741 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:50:19.461770 | orchestrator | 2025-07-06 19:50:19.462261 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-07-06 19:50:19.463340 | orchestrator | Sunday 06 July 2025 19:50:19 +0000 (0:00:01.087) 0:06:47.852 *********** 2025-07-06 19:50:19.590776 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:50:21.559332 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:50:21.560413 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:50:21.561780 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:50:21.562925 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:50:21.564397 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:50:21.564909 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:50:21.565491 | orchestrator | 2025-07-06 19:50:21.566106 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-07-06 19:50:21.566896 | orchestrator | Sunday 06 July 2025 19:50:21 +0000 (0:00:02.105) 0:06:49.957 *********** 2025-07-06 19:50:21.665556 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:50:21.665731 | orchestrator | 2025-07-06 19:50:21.666748 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-07-06 19:50:21.667942 | orchestrator | Sunday 06 July 2025 19:50:21 +0000 (0:00:00.110) 0:06:50.068 *********** 2025-07-06 19:50:22.717130 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:22.717489 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:50:22.718617 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:50:22.720160 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:50:22.720675 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:50:22.720799 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:50:22.721530 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:50:22.722365 | orchestrator | 2025-07-06 19:50:22.723347 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-07-06 19:50:22.723519 | orchestrator | Sunday 06 July 2025 19:50:22 +0000 (0:00:01.048) 0:06:51.116 *********** 2025-07-06 19:50:23.043034 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:50:23.108676 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:50:23.170512 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:50:23.240771 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:50:23.304780 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:50:23.423114 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:50:23.423320 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:50:23.424955 | orchestrator | 2025-07-06 19:50:23.426010 | orchestrator | TASK [osism.services.docker : Include facts tasks] 
***************************** 2025-07-06 19:50:23.430121 | orchestrator | Sunday 06 July 2025 19:50:23 +0000 (0:00:00.708) 0:06:51.824 *********** 2025-07-06 19:50:24.300780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:50:24.301113 | orchestrator | 2025-07-06 19:50:24.301212 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-07-06 19:50:24.302081 | orchestrator | Sunday 06 July 2025 19:50:24 +0000 (0:00:00.877) 0:06:52.702 *********** 2025-07-06 19:50:24.725991 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:25.137767 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:50:25.138006 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:50:25.139624 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:50:25.141008 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:25.142103 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:50:25.142784 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:50:25.143846 | orchestrator | 2025-07-06 19:50:25.144937 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-07-06 19:50:25.145700 | orchestrator | Sunday 06 July 2025 19:50:25 +0000 (0:00:00.836) 0:06:53.539 *********** 2025-07-06 19:50:27.849396 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-07-06 19:50:27.850093 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-07-06 19:50:27.853508 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-07-06 19:50:27.853545 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-07-06 19:50:27.853556 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-07-06 19:50:27.853658 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-07-06 19:50:27.854653 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-07-06 19:50:27.855791 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-07-06 19:50:27.856089 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-07-06 19:50:27.856983 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-07-06 19:50:27.857721 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-07-06 19:50:27.858212 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-07-06 19:50:27.861378 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-07-06 19:50:27.861456 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-07-06 19:50:27.861472 | orchestrator | 2025-07-06 19:50:27.861485 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-07-06 19:50:27.861524 | orchestrator | Sunday 06 July 2025 19:50:27 +0000 (0:00:02.709) 0:06:56.249 *********** 2025-07-06 19:50:27.988481 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:50:28.050812 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:50:28.121442 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:50:28.199613 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:50:28.262258 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:50:28.366240 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:50:28.366445 | orchestrator | skipping: 
[testbed-node-2] 2025-07-06 19:50:28.367563 | orchestrator | 2025-07-06 19:50:28.369997 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-07-06 19:50:28.370059 | orchestrator | Sunday 06 July 2025 19:50:28 +0000 (0:00:00.519) 0:06:56.768 *********** 2025-07-06 19:50:29.163553 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:50:29.164720 | orchestrator | 2025-07-06 19:50:29.167231 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-07-06 19:50:29.167305 | orchestrator | Sunday 06 July 2025 19:50:29 +0000 (0:00:00.794) 0:06:57.562 *********** 2025-07-06 19:50:29.729806 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:29.810282 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:50:30.244690 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:50:30.246570 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:50:30.247083 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:30.249206 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:50:30.250788 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:50:30.252098 | orchestrator | 2025-07-06 19:50:30.253077 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-07-06 19:50:30.254210 | orchestrator | Sunday 06 July 2025 19:50:30 +0000 (0:00:01.076) 0:06:58.639 *********** 2025-07-06 19:50:30.679600 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:31.069519 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:50:31.069759 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:50:31.070258 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:50:31.073959 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:31.073991 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:50:31.074004 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:50:31.074058 | orchestrator | 2025-07-06 19:50:31.074076 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-07-06 19:50:31.074089 | orchestrator | Sunday 06 July 2025 19:50:31 +0000 (0:00:00.829) 0:06:59.469 *********** 2025-07-06 19:50:31.220143 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:50:31.283316 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:50:31.349529 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:50:31.418922 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:50:31.483387 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:50:31.576928 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:50:31.577742 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:50:31.579812 | orchestrator | 2025-07-06 19:50:31.580288 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-07-06 19:50:31.581340 | orchestrator | Sunday 06 July 2025 19:50:31 +0000 (0:00:00.507) 0:06:59.977 *********** 2025-07-06 19:50:33.019776 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:33.020716 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:50:33.021555 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:50:33.021587 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:50:33.025451 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:33.025589 | orchestrator | ok: [testbed-node-1] 
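The osism.commons.docker_compose tasks above and below retire any legacy standalone docker-compose installation in favour of the Compose v2 plugin. As a rough shell equivalent of that migration (a sketch only; the binary path and package names are assumptions, not taken from this job output):

# Sketch only: approximates the docker-compose -> docker-compose-plugin migration these
# tasks perform on a Debian-family node; paths and package names are assumptions.
rm -f /usr/local/bin/docker-compose          # legacy standalone binary, if present
apt-get remove -y docker-compose             # legacy distro package
apt-get install -y docker-compose-plugin     # Compose v2 as a Docker CLI plugin
docker compose version                       # Compose is now invoked as a docker subcommand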
2025-07-06 19:50:33.029120 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:50:33.030314 | orchestrator | 2025-07-06 19:50:33.030524 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-07-06 19:50:33.031388 | orchestrator | Sunday 06 July 2025 19:50:33 +0000 (0:00:01.443) 0:07:01.420 *********** 2025-07-06 19:50:33.147562 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:50:33.218292 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:50:33.281394 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:50:33.359686 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:50:33.429662 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:50:33.523560 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:50:33.526930 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:50:33.529801 | orchestrator | 2025-07-06 19:50:33.530652 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-07-06 19:50:33.531429 | orchestrator | Sunday 06 July 2025 19:50:33 +0000 (0:00:00.501) 0:07:01.922 *********** 2025-07-06 19:50:41.279032 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:41.280097 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:50:41.280934 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:50:41.282973 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:50:41.283473 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:50:41.284462 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:50:41.285319 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:50:41.286211 | orchestrator | 2025-07-06 19:50:41.287173 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-07-06 19:50:41.287884 | orchestrator | Sunday 06 July 2025 19:50:41 +0000 (0:00:07.754) 0:07:09.677 *********** 2025-07-06 19:50:42.671296 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:42.671399 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:50:42.672152 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:50:42.672533 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:50:42.673819 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:50:42.674663 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:50:42.675441 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:50:42.676255 | orchestrator | 2025-07-06 19:50:42.677143 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-07-06 19:50:42.677759 | orchestrator | Sunday 06 July 2025 19:50:42 +0000 (0:00:01.393) 0:07:11.071 *********** 2025-07-06 19:50:44.450587 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:44.450752 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:50:44.451096 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:50:44.451662 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:50:44.451985 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:50:44.452455 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:50:44.453103 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:50:44.454137 | orchestrator | 2025-07-06 19:50:44.454818 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-07-06 19:50:44.455162 | orchestrator | Sunday 06 July 2025 19:50:44 +0000 (0:00:01.781) 0:07:12.852 *********** 2025-07-06 19:50:46.037285 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:46.038626 | 
orchestrator | changed: [testbed-node-3] 2025-07-06 19:50:46.040212 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:50:46.041415 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:50:46.042929 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:50:46.044011 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:50:46.045777 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:50:46.046942 | orchestrator | 2025-07-06 19:50:46.047641 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-07-06 19:50:46.048529 | orchestrator | Sunday 06 July 2025 19:50:46 +0000 (0:00:01.583) 0:07:14.436 *********** 2025-07-06 19:50:46.440660 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:47.076565 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:50:47.077314 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:50:47.079040 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:50:47.080673 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:47.081443 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:50:47.082657 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:50:47.083098 | orchestrator | 2025-07-06 19:50:47.084138 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-07-06 19:50:47.085167 | orchestrator | Sunday 06 July 2025 19:50:47 +0000 (0:00:01.040) 0:07:15.476 *********** 2025-07-06 19:50:47.213366 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:50:47.289921 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:50:47.365820 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:50:47.430172 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:50:47.518272 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:50:47.908007 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:50:47.908922 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:50:47.909748 | orchestrator | 2025-07-06 19:50:47.911696 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-07-06 19:50:47.913096 | orchestrator | Sunday 06 July 2025 19:50:47 +0000 (0:00:00.830) 0:07:16.307 *********** 2025-07-06 19:50:48.039969 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:50:48.100647 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:50:48.172245 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:50:48.235694 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:50:48.299963 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:50:48.399713 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:50:48.400623 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:50:48.402978 | orchestrator | 2025-07-06 19:50:48.403059 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-07-06 19:50:48.403688 | orchestrator | Sunday 06 July 2025 19:50:48 +0000 (0:00:00.493) 0:07:16.800 *********** 2025-07-06 19:50:48.524968 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:48.597343 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:50:48.657322 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:50:48.720024 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:50:48.954413 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:49.058543 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:50:49.060705 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:50:49.060789 | orchestrator | 2025-07-06 19:50:49.061673 | orchestrator | TASK 
[osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-07-06 19:50:49.062610 | orchestrator | Sunday 06 July 2025 19:50:49 +0000 (0:00:00.658) 0:07:17.459 *********** 2025-07-06 19:50:49.194352 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:49.257511 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:50:49.319207 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:50:49.389818 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:50:49.451984 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:49.550975 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:50:49.551259 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:50:49.552764 | orchestrator | 2025-07-06 19:50:49.553897 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-07-06 19:50:49.554834 | orchestrator | Sunday 06 July 2025 19:50:49 +0000 (0:00:00.491) 0:07:17.950 *********** 2025-07-06 19:50:49.682229 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:49.742351 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:50:49.810650 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:50:49.875255 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:50:49.940195 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:50.056423 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:50:50.056543 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:50:50.057651 | orchestrator | 2025-07-06 19:50:50.058347 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-07-06 19:50:50.059024 | orchestrator | Sunday 06 July 2025 19:50:50 +0000 (0:00:00.508) 0:07:18.458 *********** 2025-07-06 19:50:55.638134 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:55.638246 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:50:55.638561 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:50:55.640241 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:50:55.640343 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:55.640777 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:50:55.644304 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:50:55.644685 | orchestrator | 2025-07-06 19:50:55.645567 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-07-06 19:50:55.647058 | orchestrator | Sunday 06 July 2025 19:50:55 +0000 (0:00:05.579) 0:07:24.038 *********** 2025-07-06 19:50:55.772609 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:50:55.842137 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:50:55.904314 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:50:56.020670 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:50:56.087690 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:50:56.218205 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:50:56.218298 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:50:56.218895 | orchestrator | 2025-07-06 19:50:56.220625 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-07-06 19:50:56.220664 | orchestrator | Sunday 06 July 2025 19:50:56 +0000 (0:00:00.577) 0:07:24.616 *********** 2025-07-06 19:50:57.355676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:50:57.356655 | orchestrator | 2025-07-06 
19:50:57.357454 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-07-06 19:50:57.357600 | orchestrator | Sunday 06 July 2025 19:50:57 +0000 (0:00:01.139) 0:07:25.755 *********** 2025-07-06 19:50:59.196607 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:59.196713 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:50:59.198904 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:50:59.199092 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:50:59.200677 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:59.203175 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:50:59.204813 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:50:59.206164 | orchestrator | 2025-07-06 19:50:59.206942 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-07-06 19:50:59.207554 | orchestrator | Sunday 06 July 2025 19:50:59 +0000 (0:00:01.839) 0:07:27.595 *********** 2025-07-06 19:51:00.295998 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:00.296517 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:51:00.297608 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:51:00.301335 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:51:00.301684 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:51:00.303183 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:51:00.304148 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:51:00.305016 | orchestrator | 2025-07-06 19:51:00.305789 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-07-06 19:51:00.306556 | orchestrator | Sunday 06 July 2025 19:51:00 +0000 (0:00:01.102) 0:07:28.698 *********** 2025-07-06 19:51:01.379348 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:01.381173 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:51:01.382490 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:51:01.384161 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:51:01.385469 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:51:01.386183 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:51:01.386842 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:51:01.388039 | orchestrator | 2025-07-06 19:51:01.389125 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-07-06 19:51:01.389473 | orchestrator | Sunday 06 July 2025 19:51:01 +0000 (0:00:01.074) 0:07:29.773 *********** 2025-07-06 19:51:03.087284 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-07-06 19:51:03.087953 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-07-06 19:51:03.088766 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-07-06 19:51:03.089388 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-07-06 19:51:03.090361 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-07-06 19:51:03.091305 | orchestrator | changed: [testbed-node-1] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-07-06 19:51:03.092331 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-07-06 19:51:03.092949 | orchestrator | 2025-07-06 19:51:03.093744 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-07-06 19:51:03.094380 | orchestrator | Sunday 06 July 2025 19:51:03 +0000 (0:00:01.712) 0:07:31.485 *********** 2025-07-06 19:51:03.986456 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:51:03.987585 | orchestrator | 2025-07-06 19:51:03.988150 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-07-06 19:51:03.989681 | orchestrator | Sunday 06 July 2025 19:51:03 +0000 (0:00:00.899) 0:07:32.384 *********** 2025-07-06 19:51:12.887253 | orchestrator | changed: [testbed-manager] 2025-07-06 19:51:12.887613 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:12.888567 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:12.889491 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:12.889604 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:12.890521 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:12.894752 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:12.895578 | orchestrator | 2025-07-06 19:51:12.897060 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-07-06 19:51:12.897394 | orchestrator | Sunday 06 July 2025 19:51:12 +0000 (0:00:08.899) 0:07:41.284 *********** 2025-07-06 19:51:14.622694 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:14.623576 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:51:14.625251 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:51:14.625379 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:51:14.627507 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:51:14.628695 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:51:14.629905 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:51:14.630585 | orchestrator | 2025-07-06 19:51:14.631515 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-07-06 19:51:14.632106 | orchestrator | Sunday 06 July 2025 19:51:14 +0000 (0:00:01.737) 0:07:43.021 *********** 2025-07-06 19:51:15.885790 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:51:15.885958 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:51:15.886411 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:51:15.887108 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:51:15.887312 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:51:15.888101 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:51:15.889367 | orchestrator | 2025-07-06 19:51:15.889551 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-07-06 19:51:15.890093 | orchestrator | Sunday 06 July 2025 19:51:15 +0000 (0:00:01.262) 0:07:44.283 *********** 2025-07-06 19:51:17.332195 | orchestrator | changed: [testbed-manager] 2025-07-06 19:51:17.332426 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:17.333255 | orchestrator | changed: [testbed-node-4] 
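The osism.services.chrony tasks above render chrony.conf from the role's chrony.conf.j2 template, and the handler running here restarts the service. The rendered configuration is not part of this log; a minimal sketch of what such a Debian-family chrony setup might look like (the pool and options are assumptions):

# Illustrative only: the actual chrony.conf.j2 rendering is not shown in this job output.
printf '%s\n' \
  'pool pool.ntp.org iburst' \
  'driftfile /var/lib/chrony/chrony.drift' \
  'makestep 1.0 3' \
  'rtcsync' > /etc/chrony/chrony.conf
systemctl restart chrony
chronyc tracking    # confirm the node is synchronising after the restart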
2025-07-06 19:51:17.338271 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:17.338514 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:17.339648 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:17.340353 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:17.340993 | orchestrator | 2025-07-06 19:51:17.341636 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-07-06 19:51:17.342249 | orchestrator | 2025-07-06 19:51:17.343450 | orchestrator | TASK [Include hardening role] ************************************************** 2025-07-06 19:51:17.344053 | orchestrator | Sunday 06 July 2025 19:51:17 +0000 (0:00:01.450) 0:07:45.734 *********** 2025-07-06 19:51:17.460525 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:51:17.524009 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:51:17.583018 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:51:17.649132 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:51:17.712994 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:51:17.833533 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:51:17.834490 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:51:17.835220 | orchestrator | 2025-07-06 19:51:17.836184 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-07-06 19:51:17.839709 | orchestrator | 2025-07-06 19:51:17.840486 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-07-06 19:51:17.841204 | orchestrator | Sunday 06 July 2025 19:51:17 +0000 (0:00:00.502) 0:07:46.237 *********** 2025-07-06 19:51:19.165751 | orchestrator | changed: [testbed-manager] 2025-07-06 19:51:19.165853 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:19.165923 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:19.165935 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:19.165945 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:19.165954 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:19.165963 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:19.165972 | orchestrator | 2025-07-06 19:51:19.165982 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-07-06 19:51:19.165992 | orchestrator | Sunday 06 July 2025 19:51:19 +0000 (0:00:01.322) 0:07:47.559 *********** 2025-07-06 19:51:20.631101 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:20.631230 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:51:20.631258 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:51:20.632245 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:51:20.634117 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:51:20.635097 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:51:20.635727 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:51:20.636089 | orchestrator | 2025-07-06 19:51:20.636601 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-07-06 19:51:20.637302 | orchestrator | Sunday 06 July 2025 19:51:20 +0000 (0:00:01.469) 0:07:49.029 *********** 2025-07-06 19:51:20.949106 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:51:21.010664 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:51:21.079645 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:51:21.142980 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:51:21.202406 | orchestrator | skipping: 
[testbed-node-0] 2025-07-06 19:51:21.606399 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:51:21.606822 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:51:21.607536 | orchestrator | 2025-07-06 19:51:21.608176 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-07-06 19:51:21.608610 | orchestrator | Sunday 06 July 2025 19:51:21 +0000 (0:00:00.979) 0:07:50.009 *********** 2025-07-06 19:51:22.883051 | orchestrator | changed: [testbed-manager] 2025-07-06 19:51:22.883272 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:22.883295 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:22.883518 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:22.884121 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:22.884813 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:22.886171 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:22.887469 | orchestrator | 2025-07-06 19:51:22.887523 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-07-06 19:51:22.887587 | orchestrator | 2025-07-06 19:51:22.888031 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-07-06 19:51:22.888441 | orchestrator | Sunday 06 July 2025 19:51:22 +0000 (0:00:01.274) 0:07:51.283 *********** 2025-07-06 19:51:23.843121 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 19:51:23.843463 | orchestrator | 2025-07-06 19:51:23.844135 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-07-06 19:51:23.844852 | orchestrator | Sunday 06 July 2025 19:51:23 +0000 (0:00:00.961) 0:07:52.245 *********** 2025-07-06 19:51:24.253412 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:24.699504 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:51:24.699590 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:51:24.700241 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:51:24.700932 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:51:24.701054 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:51:24.701340 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:51:24.702119 | orchestrator | 2025-07-06 19:51:24.702219 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-07-06 19:51:24.703203 | orchestrator | Sunday 06 July 2025 19:51:24 +0000 (0:00:00.857) 0:07:53.102 *********** 2025-07-06 19:51:25.784287 | orchestrator | changed: [testbed-manager] 2025-07-06 19:51:25.784495 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:25.785076 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:25.785608 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:25.786011 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:25.786711 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:25.787423 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:25.788256 | orchestrator | 2025-07-06 19:51:25.790832 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-07-06 19:51:25.791185 | orchestrator | Sunday 06 July 2025 19:51:25 +0000 (0:00:01.083) 0:07:54.185 *********** 2025-07-06 19:51:26.823572 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2025-07-06 19:51:26.824600 | orchestrator | 2025-07-06 19:51:26.825710 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-07-06 19:51:26.828317 | orchestrator | Sunday 06 July 2025 19:51:26 +0000 (0:00:01.038) 0:07:55.224 *********** 2025-07-06 19:51:27.228257 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:27.650607 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:51:27.651202 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:51:27.651688 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:51:27.652721 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:51:27.653422 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:51:27.654296 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:51:27.655104 | orchestrator | 2025-07-06 19:51:27.655531 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-07-06 19:51:27.656136 | orchestrator | Sunday 06 July 2025 19:51:27 +0000 (0:00:00.827) 0:07:56.051 *********** 2025-07-06 19:51:28.058712 | orchestrator | changed: [testbed-manager] 2025-07-06 19:51:28.751138 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:28.752246 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:28.752949 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:28.753988 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:28.755607 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:28.756580 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:28.757418 | orchestrator | 2025-07-06 19:51:28.758624 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:51:28.759540 | orchestrator | 2025-07-06 19:51:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 19:51:28.759568 | orchestrator | 2025-07-06 19:51:28 | INFO  | Please wait and do not abort execution. 
2025-07-06 19:51:28.760110 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-07-06 19:51:28.761310 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-06 19:51:28.762075 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-06 19:51:28.762234 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-06 19:51:28.763062 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-07-06 19:51:28.763860 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-06 19:51:28.764363 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-07-06 19:51:28.764664 | orchestrator |
2025-07-06 19:51:28.766005 | orchestrator |
2025-07-06 19:51:28.766469 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 19:51:28.767056 | orchestrator | Sunday 06 July 2025 19:51:28 +0000 (0:00:01.101) 0:07:57.153 ***********
2025-07-06 19:51:28.767508 | orchestrator | ===============================================================================
2025-07-06 19:51:28.767948 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.29s
2025-07-06 19:51:28.768565 | orchestrator | osism.commons.packages : Download required packages -------------------- 35.88s
2025-07-06 19:51:28.768908 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.73s
2025-07-06 19:51:28.769328 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.15s
2025-07-06 19:51:28.769955 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.38s
2025-07-06 19:51:28.770447 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.45s
2025-07-06 19:51:28.770627 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.11s
2025-07-06 19:51:28.771112 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.49s
2025-07-06 19:51:28.771560 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.65s
2025-07-06 19:51:28.771787 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.90s
2025-07-06 19:51:28.772417 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.79s
2025-07-06 19:51:28.772692 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.24s
2025-07-06 19:51:28.773104 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.06s
2025-07-06 19:51:28.773497 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.04s
2025-07-06 19:51:28.773826 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.75s
2025-07-06 19:51:28.774230 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.71s
2025-07-06 19:51:28.774592 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.06s
2025-07-06 19:51:28.775189 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.04s
2025-07-06 19:51:28.775404 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.98s
2025-07-06 19:51:28.775700 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.71s
2025-07-06 19:51:29.443103 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-07-06 19:51:29.443208 | orchestrator | + osism apply network
2025-07-06 19:51:31.567464 | orchestrator | Registering Redlock._acquired_script
2025-07-06 19:51:31.567567 | orchestrator | Registering Redlock._extend_script
2025-07-06 19:51:31.567582 | orchestrator | Registering Redlock._release_script
2025-07-06 19:51:31.632375 | orchestrator | 2025-07-06 19:51:31 | INFO  | Task 1d269909-30b2-42af-a6e4-6ce09075e7c8 (network) was prepared for execution.
2025-07-06 19:51:31.632514 | orchestrator | 2025-07-06 19:51:31 | INFO  | It takes a moment until task 1d269909-30b2-42af-a6e4-6ce09075e7c8 (network) has been started and output is visible here.
2025-07-06 19:51:35.791557 | orchestrator |
2025-07-06 19:51:35.792462 | orchestrator | PLAY [Apply role network] ******************************************************
2025-07-06 19:51:35.795607 | orchestrator |
2025-07-06 19:51:35.795661 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-07-06 19:51:35.795871 | orchestrator | Sunday 06 July 2025 19:51:35 +0000 (0:00:00.273) 0:00:00.273 ***********
2025-07-06 19:51:35.938155 | orchestrator | ok: [testbed-manager]
2025-07-06 19:51:36.013130 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:51:36.090312 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:51:36.166777 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:51:36.348094 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:51:36.487134 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:51:36.487528 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:51:36.488326 | orchestrator |
2025-07-06 19:51:36.489082 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-07-06 19:51:36.493158 | orchestrator | Sunday 06 July 2025 19:51:36 +0000 (0:00:00.694) 0:00:00.967 ***********
2025-07-06 19:51:37.683235 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 19:51:37.683495 | orchestrator |
2025-07-06 19:51:37.684238 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-07-06 19:51:37.684912 | orchestrator | Sunday 06 July 2025 19:51:37 +0000 (0:00:01.196) 0:00:02.164 ***********
2025-07-06 19:51:39.773044 | orchestrator | ok: [testbed-manager]
2025-07-06 19:51:39.773167 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:51:39.773293 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:51:39.773317 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:51:39.774784 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:51:39.775408 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:51:39.776416 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:51:39.777687 | orchestrator |
2025-07-06 19:51:39.778105 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-07-06 19:51:39.778908 | orchestrator | Sunday 06 July 2025 19:51:39 +0000
(0:00:02.090) 0:00:04.254 *********** 2025-07-06 19:51:41.734756 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:41.735150 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:51:41.736504 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:51:41.737701 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:51:41.738796 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:51:41.741328 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:51:41.742441 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:51:41.743336 | orchestrator | 2025-07-06 19:51:41.744170 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-07-06 19:51:41.744720 | orchestrator | Sunday 06 July 2025 19:51:41 +0000 (0:00:01.959) 0:00:06.214 *********** 2025-07-06 19:51:42.249578 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-07-06 19:51:42.740210 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-07-06 19:51:42.740574 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-07-06 19:51:42.741601 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-07-06 19:51:42.742792 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-07-06 19:51:42.744181 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-07-06 19:51:42.744207 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-07-06 19:51:42.744958 | orchestrator | 2025-07-06 19:51:42.745729 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-07-06 19:51:42.746787 | orchestrator | Sunday 06 July 2025 19:51:42 +0000 (0:00:01.008) 0:00:07.223 *********** 2025-07-06 19:51:45.931313 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 19:51:45.932389 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-06 19:51:45.933050 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-06 19:51:45.937579 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-06 19:51:45.938001 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-06 19:51:45.938669 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-06 19:51:45.939678 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-06 19:51:45.940494 | orchestrator | 2025-07-06 19:51:45.941363 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-07-06 19:51:45.941629 | orchestrator | Sunday 06 July 2025 19:51:45 +0000 (0:00:03.187) 0:00:10.411 *********** 2025-07-06 19:51:47.371056 | orchestrator | changed: [testbed-manager] 2025-07-06 19:51:47.375616 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:47.375671 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:47.376325 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:47.377456 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:47.378410 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:47.381637 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:47.381951 | orchestrator | 2025-07-06 19:51:47.383536 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-07-06 19:51:47.384600 | orchestrator | Sunday 06 July 2025 19:51:47 +0000 (0:00:01.441) 0:00:11.852 *********** 2025-07-06 19:51:49.244867 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-06 19:51:49.246945 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 19:51:49.248505 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-06 19:51:49.250828 
| orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-06 19:51:49.252656 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-06 19:51:49.253305 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-06 19:51:49.254349 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-06 19:51:49.255474 | orchestrator | 2025-07-06 19:51:49.256145 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-07-06 19:51:49.257595 | orchestrator | Sunday 06 July 2025 19:51:49 +0000 (0:00:01.874) 0:00:13.727 *********** 2025-07-06 19:51:49.656682 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:49.923625 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:51:50.355515 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:51:50.356532 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:51:50.358547 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:51:50.360183 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:51:50.360621 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:51:50.362097 | orchestrator | 2025-07-06 19:51:50.363376 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-07-06 19:51:50.363714 | orchestrator | Sunday 06 July 2025 19:51:50 +0000 (0:00:01.106) 0:00:14.834 *********** 2025-07-06 19:51:50.519235 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:51:50.602878 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:51:50.689568 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:51:50.773447 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:51:50.857217 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:51:51.000957 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:51:51.003033 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:51:51.004416 | orchestrator | 2025-07-06 19:51:51.006085 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-07-06 19:51:51.007078 | orchestrator | Sunday 06 July 2025 19:51:50 +0000 (0:00:00.649) 0:00:15.483 *********** 2025-07-06 19:51:53.056360 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:53.056583 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:51:53.058406 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:51:53.060100 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:51:53.063226 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:51:53.065312 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:51:53.066159 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:51:53.067245 | orchestrator | 2025-07-06 19:51:53.068022 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-07-06 19:51:53.068850 | orchestrator | Sunday 06 July 2025 19:51:53 +0000 (0:00:02.051) 0:00:17.535 *********** 2025-07-06 19:51:53.307556 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:51:53.396162 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:51:53.477586 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:51:53.557500 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:51:53.900145 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:51:53.900350 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:51:53.901399 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-07-06 19:51:53.902197 | orchestrator | 2025-07-06 19:51:53.902715 | orchestrator | TASK [osism.commons.network : Manage 
service networkd-dispatcher] ************** 2025-07-06 19:51:53.903388 | orchestrator | Sunday 06 July 2025 19:51:53 +0000 (0:00:00.849) 0:00:18.384 *********** 2025-07-06 19:51:55.551801 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:55.551985 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:55.552003 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:55.552084 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:55.553365 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:55.554356 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:55.555593 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:55.556551 | orchestrator | 2025-07-06 19:51:55.557414 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-07-06 19:51:55.558003 | orchestrator | Sunday 06 July 2025 19:51:55 +0000 (0:00:01.642) 0:00:20.026 *********** 2025-07-06 19:51:56.816090 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:51:56.819689 | orchestrator | 2025-07-06 19:51:56.819780 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-07-06 19:51:56.819795 | orchestrator | Sunday 06 July 2025 19:51:56 +0000 (0:00:01.267) 0:00:21.294 *********** 2025-07-06 19:51:57.389681 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:57.825555 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:51:57.825644 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:51:57.826109 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:51:57.826527 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:51:57.826704 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:51:57.827562 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:51:57.827653 | orchestrator | 2025-07-06 19:51:57.827957 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-07-06 19:51:57.828189 | orchestrator | Sunday 06 July 2025 19:51:57 +0000 (0:00:01.013) 0:00:22.308 *********** 2025-07-06 19:51:58.149567 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:58.233390 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:51:58.315369 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:51:58.397058 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:51:58.476182 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:51:58.619190 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:51:58.620487 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:51:58.621004 | orchestrator | 2025-07-06 19:51:58.622197 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-07-06 19:51:58.622225 | orchestrator | Sunday 06 July 2025 19:51:58 +0000 (0:00:00.794) 0:00:23.103 *********** 2025-07-06 19:51:59.039572 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-06 19:51:59.039678 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-07-06 19:51:59.350528 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-06 19:51:59.350698 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-07-06 19:51:59.351003 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-06 
19:51:59.351939 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-07-06 19:51:59.352675 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-06 19:51:59.353017 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-07-06 19:51:59.354483 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-06 19:51:59.354580 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-07-06 19:51:59.840737 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-06 19:51:59.840812 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-07-06 19:51:59.840818 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-06 19:51:59.841854 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-07-06 19:51:59.843666 | orchestrator | 2025-07-06 19:51:59.845100 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-07-06 19:51:59.846046 | orchestrator | Sunday 06 July 2025 19:51:59 +0000 (0:00:01.209) 0:00:24.313 *********** 2025-07-06 19:52:00.022341 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:52:00.107036 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:52:00.186432 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:52:00.263770 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:52:00.344593 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:52:00.479308 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:52:00.479703 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:52:00.480876 | orchestrator | 2025-07-06 19:52:00.482221 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-07-06 19:52:00.483035 | orchestrator | Sunday 06 July 2025 19:52:00 +0000 (0:00:00.649) 0:00:24.962 *********** 2025-07-06 19:52:05.049746 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-4, testbed-node-0, testbed-node-3, testbed-node-2, testbed-node-5 2025-07-06 19:52:05.052466 | orchestrator | 2025-07-06 19:52:05.054426 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-07-06 19:52:05.055301 | orchestrator | Sunday 06 July 2025 19:52:05 +0000 (0:00:04.565) 0:00:29.528 *********** 2025-07-06 19:52:10.831285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:52:10.831431 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:52:10.832174 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:52:10.832737 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:52:10.833572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:52:10.834845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:52:10.835043 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:52:10.837239 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:52:10.837414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:52:10.840175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:52:10.840210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:52:10.840223 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:52:10.840253 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:52:10.840266 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:52:10.840278 | orchestrator | 2025-07-06 19:52:10.840737 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-07-06 19:52:10.841006 | orchestrator | Sunday 06 July 2025 19:52:10 +0000 (0:00:05.782) 
0:00:35.310 *********** 2025-07-06 19:52:16.189895 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:52:16.190879 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:52:16.192209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:52:16.193710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:52:16.194884 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:52:16.195500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:52:16.195987 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:52:16.196581 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:52:16.197324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:52:16.198110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:52:16.198690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:52:16.199395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:52:16.200121 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:52:16.200285 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:52:16.200836 | orchestrator | 2025-07-06 19:52:16.201519 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-07-06 19:52:16.201756 | orchestrator | Sunday 06 July 2025 19:52:16 +0000 (0:00:05.363) 0:00:40.673 *********** 2025-07-06 19:52:17.483349 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:52:17.483479 | orchestrator | 2025-07-06 19:52:17.486521 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-07-06 19:52:17.486625 | orchestrator | Sunday 06 July 2025 19:52:17 +0000 (0:00:01.285) 0:00:41.958 *********** 2025-07-06 19:52:17.961606 | orchestrator | ok: [testbed-manager] 2025-07-06 19:52:18.250252 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:52:18.660347 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:52:18.660749 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:52:18.664378 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:52:18.664414 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:52:18.664426 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:52:18.664657 | orchestrator | 2025-07-06 19:52:18.665657 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-07-06 19:52:18.666401 | orchestrator | Sunday 06 July 2025 19:52:18 +0000 (0:00:01.183) 0:00:43.142 *********** 2025-07-06 19:52:18.765154 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-06 19:52:18.765503 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-06 19:52:18.766545 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-06 19:52:18.769242 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-06 19:52:18.855466 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:52:18.855572 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-06 19:52:18.856831 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-06 19:52:18.857520 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-06 19:52:18.858171 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-06 19:52:18.968484 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:52:18.968574 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-06 
19:52:18.968587 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-06 19:52:18.968796 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-06 19:52:18.969744 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-06 19:52:19.069374 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:52:19.069559 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-06 19:52:19.069578 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-06 19:52:19.070417 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-06 19:52:19.070446 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-06 19:52:19.166262 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:52:19.166670 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-06 19:52:19.170397 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-06 19:52:19.170452 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-06 19:52:19.170461 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-06 19:52:19.424119 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:52:19.424499 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-06 19:52:19.425522 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-06 19:52:19.427068 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-06 19:52:19.431376 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-06 19:52:20.699598 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:52:20.700088 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-06 19:52:20.700827 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-06 19:52:20.702074 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-06 19:52:20.703430 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-06 19:52:20.704413 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:52:20.704518 | orchestrator | 2025-07-06 19:52:20.704989 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-07-06 19:52:20.705327 | orchestrator | Sunday 06 July 2025 19:52:20 +0000 (0:00:02.037) 0:00:45.179 *********** 2025-07-06 19:52:20.866335 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:52:20.945561 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:52:21.031559 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:52:21.118865 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:52:21.202210 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:52:21.323605 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:52:21.323808 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:52:21.324494 | orchestrator | 2025-07-06 19:52:21.325187 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan 
configuration changed] ******** 2025-07-06 19:52:21.325882 | orchestrator | Sunday 06 July 2025 19:52:21 +0000 (0:00:00.627) 0:00:45.807 *********** 2025-07-06 19:52:21.484574 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:52:21.566801 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:52:21.820407 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:52:21.904128 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:52:21.987764 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:52:22.029878 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:52:22.030381 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:52:22.031356 | orchestrator | 2025-07-06 19:52:22.032569 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:52:22.032984 | orchestrator | 2025-07-06 19:52:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 19:52:22.033006 | orchestrator | 2025-07-06 19:52:22 | INFO  | Please wait and do not abort execution. 2025-07-06 19:52:22.034585 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-06 19:52:22.035450 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 19:52:22.036264 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 19:52:22.036974 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 19:52:22.037668 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 19:52:22.038314 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 19:52:22.039086 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 19:52:22.039536 | orchestrator | 2025-07-06 19:52:22.040096 | orchestrator | 2025-07-06 19:52:22.040602 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:52:22.041360 | orchestrator | Sunday 06 July 2025 19:52:22 +0000 (0:00:00.706) 0:00:46.513 *********** 2025-07-06 19:52:22.041766 | orchestrator | =============================================================================== 2025-07-06 19:52:22.042248 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.78s 2025-07-06 19:52:22.042744 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.36s 2025-07-06 19:52:22.043758 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.57s 2025-07-06 19:52:22.043978 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.19s 2025-07-06 19:52:22.045356 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.09s 2025-07-06 19:52:22.045544 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.05s 2025-07-06 19:52:22.045736 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.04s 2025-07-06 19:52:22.046621 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.96s 2025-07-06 19:52:22.047010 | orchestrator | osism.commons.network : Remove netplan 
configuration template ----------- 1.87s 2025-07-06 19:52:22.047328 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.64s 2025-07-06 19:52:22.048078 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.44s 2025-07-06 19:52:22.048339 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.29s 2025-07-06 19:52:22.048753 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.27s 2025-07-06 19:52:22.049528 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.21s 2025-07-06 19:52:22.049741 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.20s 2025-07-06 19:52:22.050218 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.18s 2025-07-06 19:52:22.050823 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.11s 2025-07-06 19:52:22.051035 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.01s 2025-07-06 19:52:22.051640 | orchestrator | osism.commons.network : Create required directories --------------------- 1.01s 2025-07-06 19:52:22.051925 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.85s 2025-07-06 19:52:22.609099 | orchestrator | + osism apply wireguard 2025-07-06 19:52:24.285448 | orchestrator | Registering Redlock._acquired_script 2025-07-06 19:52:24.285569 | orchestrator | Registering Redlock._extend_script 2025-07-06 19:52:24.285584 | orchestrator | Registering Redlock._release_script 2025-07-06 19:52:24.345760 | orchestrator | 2025-07-06 19:52:24 | INFO  | Task 61b8f89b-1197-4494-b7d0-92b4813f049b (wireguard) was prepared for execution. 2025-07-06 19:52:24.345857 | orchestrator | 2025-07-06 19:52:24 | INFO  | It takes a moment until task 61b8f89b-1197-4494-b7d0-92b4813f049b (wireguard) has been started and output is visible here. 
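For orientation before the wireguard play starts: the "Create systemd networkd netdev/network files" tasks above materialize one unit pair per VXLAN on each host under /etc/systemd/network (the cleanup task lists them as 30-vxlan0.netdev/.network and 30-vxlan1.netdev/.network). The role's actual templates are not part of this log; what follows is only a minimal sketch, assuming standard systemd-networkd options, of what such a pair for vxlan0 on testbed-manager (VNI 42, MTU 1350, local 192.168.16.5, address 192.168.112.5/20) could look like, plus a few commands to inspect the result. The heredoc contents are illustrative assumptions mirroring the task parameters logged above, not the role's real output.

# Illustrative sketch only -- not the role's real template output.
cat <<'EOF'   # shape of /etc/systemd/network/30-vxlan0.netdev
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
MacLearning=true
EOF

cat <<'EOF'   # shape of /etc/systemd/network/30-vxlan0.network
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20

# one all-zero FDB entry per remote VTEP (the 'dests' list above) provides
# unicast flooding to the other testbed hosts
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.10
EOF

# The uplink's own .network file additionally needs "VXLAN=vxlan0" in its
# [Network] section so that networkd attaches the tunnel to that interface.

# Quick checks once systemd-networkd has applied the files:
networkctl status vxlan0
ip -d link show vxlan0
bridge fdb show dev vxlan0

Note from the logged parameters: on the compute nodes vxlan0 is created without an address (empty 'addresses' list), while vxlan1 carries a 192.168.128.0/20 address on every host; only the manager also addresses vxlan0 (192.168.112.5/20).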
2025-07-06 19:52:28.285789 | orchestrator | 2025-07-06 19:52:28.285897 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-07-06 19:52:28.288052 | orchestrator | 2025-07-06 19:52:28.288227 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-07-06 19:52:28.289212 | orchestrator | Sunday 06 July 2025 19:52:28 +0000 (0:00:00.220) 0:00:00.220 *********** 2025-07-06 19:52:29.756085 | orchestrator | ok: [testbed-manager] 2025-07-06 19:52:29.757300 | orchestrator | 2025-07-06 19:52:29.757508 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-07-06 19:52:29.758663 | orchestrator | Sunday 06 July 2025 19:52:29 +0000 (0:00:01.474) 0:00:01.695 *********** 2025-07-06 19:52:35.977194 | orchestrator | changed: [testbed-manager] 2025-07-06 19:52:35.977375 | orchestrator | 2025-07-06 19:52:35.978160 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-07-06 19:52:35.980035 | orchestrator | Sunday 06 July 2025 19:52:35 +0000 (0:00:06.221) 0:00:07.916 *********** 2025-07-06 19:52:36.507474 | orchestrator | changed: [testbed-manager] 2025-07-06 19:52:36.509243 | orchestrator | 2025-07-06 19:52:36.510473 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-07-06 19:52:36.511580 | orchestrator | Sunday 06 July 2025 19:52:36 +0000 (0:00:00.532) 0:00:08.448 *********** 2025-07-06 19:52:36.932979 | orchestrator | changed: [testbed-manager] 2025-07-06 19:52:36.933088 | orchestrator | 2025-07-06 19:52:36.933171 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-07-06 19:52:36.934111 | orchestrator | Sunday 06 July 2025 19:52:36 +0000 (0:00:00.425) 0:00:08.874 *********** 2025-07-06 19:52:37.439847 | orchestrator | ok: [testbed-manager] 2025-07-06 19:52:37.441758 | orchestrator | 2025-07-06 19:52:37.441816 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-07-06 19:52:37.441830 | orchestrator | Sunday 06 July 2025 19:52:37 +0000 (0:00:00.505) 0:00:09.379 *********** 2025-07-06 19:52:37.961380 | orchestrator | ok: [testbed-manager] 2025-07-06 19:52:37.962136 | orchestrator | 2025-07-06 19:52:37.962176 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-07-06 19:52:37.962601 | orchestrator | Sunday 06 July 2025 19:52:37 +0000 (0:00:00.523) 0:00:09.903 *********** 2025-07-06 19:52:38.337582 | orchestrator | ok: [testbed-manager] 2025-07-06 19:52:38.338070 | orchestrator | 2025-07-06 19:52:38.338731 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-07-06 19:52:38.339563 | orchestrator | Sunday 06 July 2025 19:52:38 +0000 (0:00:00.373) 0:00:10.276 *********** 2025-07-06 19:52:39.439646 | orchestrator | changed: [testbed-manager] 2025-07-06 19:52:39.440052 | orchestrator | 2025-07-06 19:52:39.440551 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-07-06 19:52:39.442726 | orchestrator | Sunday 06 July 2025 19:52:39 +0000 (0:00:01.103) 0:00:11.380 *********** 2025-07-06 19:52:40.353815 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-06 19:52:40.355269 | orchestrator | changed: [testbed-manager] 2025-07-06 19:52:40.355312 | orchestrator | 2025-07-06 19:52:40.356041 | orchestrator | TASK 
[osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-07-06 19:52:40.356760 | orchestrator | Sunday 06 July 2025 19:52:40 +0000 (0:00:00.913) 0:00:12.293 *********** 2025-07-06 19:52:41.953302 | orchestrator | changed: [testbed-manager] 2025-07-06 19:52:41.953852 | orchestrator | 2025-07-06 19:52:41.955167 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-07-06 19:52:41.955435 | orchestrator | Sunday 06 July 2025 19:52:41 +0000 (0:00:01.598) 0:00:13.892 *********** 2025-07-06 19:52:42.922008 | orchestrator | changed: [testbed-manager] 2025-07-06 19:52:42.923264 | orchestrator | 2025-07-06 19:52:42.924486 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:52:42.924547 | orchestrator | 2025-07-06 19:52:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 19:52:42.924564 | orchestrator | 2025-07-06 19:52:42 | INFO  | Please wait and do not abort execution. 2025-07-06 19:52:42.925756 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:52:42.925797 | orchestrator | 2025-07-06 19:52:42.926402 | orchestrator | 2025-07-06 19:52:42.927198 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:52:42.927669 | orchestrator | Sunday 06 July 2025 19:52:42 +0000 (0:00:00.967) 0:00:14.859 *********** 2025-07-06 19:52:42.928449 | orchestrator | =============================================================================== 2025-07-06 19:52:42.929799 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.22s 2025-07-06 19:52:42.930414 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.60s 2025-07-06 19:52:42.931446 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.47s 2025-07-06 19:52:42.932241 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.10s 2025-07-06 19:52:42.932477 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.97s 2025-07-06 19:52:42.933115 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.91s 2025-07-06 19:52:42.933682 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.53s 2025-07-06 19:52:42.934203 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.52s 2025-07-06 19:52:42.934524 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.51s 2025-07-06 19:52:42.935372 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s 2025-07-06 19:52:42.935573 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.37s 2025-07-06 19:52:43.440716 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-07-06 19:52:43.479470 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-07-06 19:52:43.479619 | orchestrator | Dload Upload Total Spent Left Speed 2025-07-06 19:52:43.568414 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 167 0 --:--:-- --:--:-- --:--:-- 168 2025-07-06 19:52:43.589540 | orchestrator | + osism apply --environment custom workarounds 
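The WireGuard steps just logged amount to: install wireguard and iptables, generate the server key pair and a preshared key, render /etc/wireguard/wg0.conf plus a client configuration, and enable wg-quick@wg0. Below is a minimal shell sketch of the equivalent key handling and a post-restart check; the key file locations are assumptions for illustration and are not taken from this log.

# Rough equivalent of the key-material tasks above (paths are assumed):
umask 077
wg genkey | tee /etc/wireguard/server.private | wg pubkey > /etc/wireguard/server.public
wg genpsk > /etc/wireguard/preshared.key

# The role then renders /etc/wireguard/wg0.conf and a client profile from
# these keys and enables the tunnel:
systemctl enable --now wg-quick@wg0.service

# Verify after the "Restart wg0 service" handler has run:
wg show wg0
systemctl status wg-quick@wg0.service --no-pager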
2025-07-06 19:52:45.243971 | orchestrator | 2025-07-06 19:52:45 | INFO  | Trying to run play workarounds in environment custom 2025-07-06 19:52:45.248567 | orchestrator | Registering Redlock._acquired_script 2025-07-06 19:52:45.248623 | orchestrator | Registering Redlock._extend_script 2025-07-06 19:52:45.248635 | orchestrator | Registering Redlock._release_script 2025-07-06 19:52:45.307880 | orchestrator | 2025-07-06 19:52:45 | INFO  | Task e02e7d9b-6023-40e0-8cb8-7c6e383dd42c (workarounds) was prepared for execution. 2025-07-06 19:52:45.308074 | orchestrator | 2025-07-06 19:52:45 | INFO  | It takes a moment until task e02e7d9b-6023-40e0-8cb8-7c6e383dd42c (workarounds) has been started and output is visible here. 2025-07-06 19:52:49.146310 | orchestrator | 2025-07-06 19:52:49.146422 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 19:52:49.146455 | orchestrator | 2025-07-06 19:52:49.147913 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-07-06 19:52:49.148972 | orchestrator | Sunday 06 July 2025 19:52:49 +0000 (0:00:00.141) 0:00:00.141 *********** 2025-07-06 19:52:49.305288 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-07-06 19:52:49.387715 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-07-06 19:52:49.468697 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-07-06 19:52:49.550696 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-07-06 19:52:49.733008 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-07-06 19:52:49.863564 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-07-06 19:52:49.863726 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-07-06 19:52:49.864712 | orchestrator | 2025-07-06 19:52:49.865283 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-07-06 19:52:49.866298 | orchestrator | 2025-07-06 19:52:49.866644 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-07-06 19:52:49.867305 | orchestrator | Sunday 06 July 2025 19:52:49 +0000 (0:00:00.724) 0:00:00.865 *********** 2025-07-06 19:52:51.912533 | orchestrator | ok: [testbed-manager] 2025-07-06 19:52:51.914213 | orchestrator | 2025-07-06 19:52:51.914482 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-07-06 19:52:51.916517 | orchestrator | 2025-07-06 19:52:51.917702 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-07-06 19:52:51.918833 | orchestrator | Sunday 06 July 2025 19:52:51 +0000 (0:00:02.045) 0:00:02.911 *********** 2025-07-06 19:52:53.705213 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:52:53.706524 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:52:53.707403 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:52:53.708381 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:52:53.709830 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:52:53.711173 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:52:53.712012 | orchestrator | 2025-07-06 19:52:53.713010 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-07-06 19:52:53.713596 | orchestrator | 2025-07-06 19:52:53.714686 | orchestrator | TASK 
[Copy custom CA certificates] ********************************************* 2025-07-06 19:52:53.715221 | orchestrator | Sunday 06 July 2025 19:52:53 +0000 (0:00:01.792) 0:00:04.703 *********** 2025-07-06 19:52:55.200509 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-06 19:52:55.203681 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-06 19:52:55.205227 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-06 19:52:55.206113 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-06 19:52:55.207590 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-06 19:52:55.208473 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-06 19:52:55.209341 | orchestrator | 2025-07-06 19:52:55.210008 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-07-06 19:52:55.210370 | orchestrator | Sunday 06 July 2025 19:52:55 +0000 (0:00:01.491) 0:00:06.195 *********** 2025-07-06 19:52:58.969658 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:52:58.970616 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:52:58.970675 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:52:58.973645 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:52:58.973843 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:52:58.974858 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:52:58.975343 | orchestrator | 2025-07-06 19:52:58.975566 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-07-06 19:52:58.976312 | orchestrator | Sunday 06 July 2025 19:52:58 +0000 (0:00:03.764) 0:00:09.959 *********** 2025-07-06 19:52:59.142397 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:52:59.218186 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:52:59.294569 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:52:59.370342 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:52:59.652633 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:52:59.652832 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:52:59.653985 | orchestrator | 2025-07-06 19:52:59.654771 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-07-06 19:52:59.656011 | orchestrator | 2025-07-06 19:52:59.656751 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-07-06 19:52:59.657522 | orchestrator | Sunday 06 July 2025 19:52:59 +0000 (0:00:00.691) 0:00:10.651 *********** 2025-07-06 19:53:01.294365 | orchestrator | changed: [testbed-manager] 2025-07-06 19:53:01.294509 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:53:01.295162 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:53:01.297203 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:53:01.298095 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:53:01.299313 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:53:01.300390 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:53:01.300531 | orchestrator | 2025-07-06 19:53:01.301260 | orchestrator | TASK [Copy workarounds systemd 
unit file] ************************************** 2025-07-06 19:53:01.302105 | orchestrator | Sunday 06 July 2025 19:53:01 +0000 (0:00:01.641) 0:00:12.293 *********** 2025-07-06 19:53:02.968319 | orchestrator | changed: [testbed-manager] 2025-07-06 19:53:02.969983 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:53:02.974144 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:53:02.974196 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:53:02.974209 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:53:02.974488 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:53:02.975351 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:53:02.975871 | orchestrator | 2025-07-06 19:53:02.976548 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-07-06 19:53:02.977114 | orchestrator | Sunday 06 July 2025 19:53:02 +0000 (0:00:01.669) 0:00:13.963 *********** 2025-07-06 19:53:04.466412 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:53:04.466539 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:53:04.466557 | orchestrator | ok: [testbed-manager] 2025-07-06 19:53:04.466584 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:53:04.467397 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:53:04.467621 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:53:04.471394 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:53:04.471428 | orchestrator | 2025-07-06 19:53:04.471437 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-07-06 19:53:04.471445 | orchestrator | Sunday 06 July 2025 19:53:04 +0000 (0:00:01.502) 0:00:15.465 *********** 2025-07-06 19:53:06.234369 | orchestrator | changed: [testbed-manager] 2025-07-06 19:53:06.235032 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:53:06.236418 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:53:06.237185 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:53:06.237890 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:53:06.240339 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:53:06.240368 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:53:06.240726 | orchestrator | 2025-07-06 19:53:06.242070 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-07-06 19:53:06.242565 | orchestrator | Sunday 06 July 2025 19:53:06 +0000 (0:00:01.766) 0:00:17.232 *********** 2025-07-06 19:53:06.417228 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:53:06.498213 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:53:06.576043 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:53:06.651207 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:53:06.729364 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:53:06.860587 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:53:06.860761 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:53:06.862330 | orchestrator | 2025-07-06 19:53:06.865769 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-07-06 19:53:06.865824 | orchestrator | 2025-07-06 19:53:06.865837 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-07-06 19:53:06.865848 | orchestrator | Sunday 06 July 2025 19:53:06 +0000 (0:00:00.629) 0:00:17.861 *********** 2025-07-06 19:53:09.509229 | orchestrator | ok: [testbed-manager] 2025-07-06 19:53:09.510427 | orchestrator | ok: [testbed-node-3] 
2025-07-06 19:53:09.511423 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:53:09.513026 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:53:09.514101 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:53:09.514507 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:53:09.515459 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:53:09.516193 | orchestrator | 2025-07-06 19:53:09.516868 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:53:09.517336 | orchestrator | 2025-07-06 19:53:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 19:53:09.517439 | orchestrator | 2025-07-06 19:53:09 | INFO  | Please wait and do not abort execution. 2025-07-06 19:53:09.518226 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:53:09.518720 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:53:09.519317 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:53:09.519686 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:53:09.520254 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:53:09.520565 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:53:09.521060 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:53:09.521391 | orchestrator | 2025-07-06 19:53:09.521827 | orchestrator | 2025-07-06 19:53:09.522186 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:53:09.522581 | orchestrator | Sunday 06 July 2025 19:53:09 +0000 (0:00:02.646) 0:00:20.508 *********** 2025-07-06 19:53:09.523105 | orchestrator | =============================================================================== 2025-07-06 19:53:09.523510 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.76s 2025-07-06 19:53:09.523863 | orchestrator | Install python3-docker -------------------------------------------------- 2.65s 2025-07-06 19:53:09.524345 | orchestrator | Apply netplan configuration --------------------------------------------- 2.05s 2025-07-06 19:53:09.524678 | orchestrator | Apply netplan configuration --------------------------------------------- 1.79s 2025-07-06 19:53:09.525197 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.77s 2025-07-06 19:53:09.525653 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.67s 2025-07-06 19:53:09.526078 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.64s 2025-07-06 19:53:09.526456 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.50s 2025-07-06 19:53:09.526838 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.49s 2025-07-06 19:53:09.528288 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.72s 2025-07-06 19:53:09.528811 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.69s 2025-07-06 19:53:09.528849 | orchestrator | 
Enable and start workarounds.service (RedHat) --------------------------- 0.63s 2025-07-06 19:53:10.152439 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-07-06 19:53:11.788456 | orchestrator | Registering Redlock._acquired_script 2025-07-06 19:53:11.788560 | orchestrator | Registering Redlock._extend_script 2025-07-06 19:53:11.788576 | orchestrator | Registering Redlock._release_script 2025-07-06 19:53:11.848587 | orchestrator | 2025-07-06 19:53:11 | INFO  | Task ecb996d5-26fa-43dd-8489-ce77c50b64d0 (reboot) was prepared for execution. 2025-07-06 19:53:11.848695 | orchestrator | 2025-07-06 19:53:11 | INFO  | It takes a moment until task ecb996d5-26fa-43dd-8489-ce77c50b64d0 (reboot) has been started and output is visible here. 2025-07-06 19:53:15.809366 | orchestrator | 2025-07-06 19:53:15.810610 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-06 19:53:15.811596 | orchestrator | 2025-07-06 19:53:15.813113 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-06 19:53:15.813711 | orchestrator | Sunday 06 July 2025 19:53:15 +0000 (0:00:00.219) 0:00:00.219 *********** 2025-07-06 19:53:15.918518 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:53:15.918682 | orchestrator | 2025-07-06 19:53:15.919264 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-06 19:53:15.920190 | orchestrator | Sunday 06 July 2025 19:53:15 +0000 (0:00:00.112) 0:00:00.332 *********** 2025-07-06 19:53:16.848668 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:53:16.848971 | orchestrator | 2025-07-06 19:53:16.849673 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-06 19:53:16.850444 | orchestrator | Sunday 06 July 2025 19:53:16 +0000 (0:00:00.930) 0:00:01.262 *********** 2025-07-06 19:53:16.958468 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:53:16.959233 | orchestrator | 2025-07-06 19:53:16.960082 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-06 19:53:16.961809 | orchestrator | 2025-07-06 19:53:16.962483 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-06 19:53:16.962636 | orchestrator | Sunday 06 July 2025 19:53:16 +0000 (0:00:00.110) 0:00:01.373 *********** 2025-07-06 19:53:17.065606 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:53:17.065818 | orchestrator | 2025-07-06 19:53:17.066759 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-06 19:53:17.068138 | orchestrator | Sunday 06 July 2025 19:53:17 +0000 (0:00:00.106) 0:00:01.479 *********** 2025-07-06 19:53:17.724772 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:53:17.725110 | orchestrator | 2025-07-06 19:53:17.727134 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-06 19:53:17.728525 | orchestrator | Sunday 06 July 2025 19:53:17 +0000 (0:00:00.659) 0:00:02.139 *********** 2025-07-06 19:53:17.837699 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:53:17.838684 | orchestrator | 2025-07-06 19:53:17.841320 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-06 19:53:17.841350 | orchestrator | 2025-07-06 19:53:17.842217 | orchestrator | TASK [Exit playbook, if user did not mean to 
reboot systems] ******************* 2025-07-06 19:53:17.843867 | orchestrator | Sunday 06 July 2025 19:53:17 +0000 (0:00:00.112) 0:00:02.252 *********** 2025-07-06 19:53:18.042753 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:53:18.042911 | orchestrator | 2025-07-06 19:53:18.044915 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-06 19:53:18.045542 | orchestrator | Sunday 06 July 2025 19:53:18 +0000 (0:00:00.203) 0:00:02.455 *********** 2025-07-06 19:53:18.694825 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:53:18.695499 | orchestrator | 2025-07-06 19:53:18.696862 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-06 19:53:18.697598 | orchestrator | Sunday 06 July 2025 19:53:18 +0000 (0:00:00.653) 0:00:03.108 *********** 2025-07-06 19:53:18.814791 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:53:18.816320 | orchestrator | 2025-07-06 19:53:18.818874 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-06 19:53:18.820197 | orchestrator | 2025-07-06 19:53:18.821129 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-06 19:53:18.821505 | orchestrator | Sunday 06 July 2025 19:53:18 +0000 (0:00:00.120) 0:00:03.229 *********** 2025-07-06 19:53:18.918767 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:53:18.919075 | orchestrator | 2025-07-06 19:53:18.920747 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-06 19:53:18.921581 | orchestrator | Sunday 06 July 2025 19:53:18 +0000 (0:00:00.103) 0:00:03.332 *********** 2025-07-06 19:53:19.578255 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:53:19.578516 | orchestrator | 2025-07-06 19:53:19.579430 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-06 19:53:19.580910 | orchestrator | Sunday 06 July 2025 19:53:19 +0000 (0:00:00.659) 0:00:03.992 *********** 2025-07-06 19:53:19.702380 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:53:19.702989 | orchestrator | 2025-07-06 19:53:19.703691 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-06 19:53:19.705570 | orchestrator | 2025-07-06 19:53:19.705609 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-06 19:53:19.706511 | orchestrator | Sunday 06 July 2025 19:53:19 +0000 (0:00:00.124) 0:00:04.116 *********** 2025-07-06 19:53:19.835545 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:53:19.835755 | orchestrator | 2025-07-06 19:53:19.837016 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-06 19:53:19.838669 | orchestrator | Sunday 06 July 2025 19:53:19 +0000 (0:00:00.133) 0:00:04.249 *********** 2025-07-06 19:53:20.474565 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:53:20.474664 | orchestrator | 2025-07-06 19:53:20.475559 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-06 19:53:20.476231 | orchestrator | Sunday 06 July 2025 19:53:20 +0000 (0:00:00.637) 0:00:04.887 *********** 2025-07-06 19:53:20.585481 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:53:20.586140 | orchestrator | 2025-07-06 19:53:20.587063 | orchestrator | PLAY [Reboot systems] 
********************************************************** 2025-07-06 19:53:20.587424 | orchestrator | 2025-07-06 19:53:20.589630 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-06 19:53:20.589730 | orchestrator | Sunday 06 July 2025 19:53:20 +0000 (0:00:00.111) 0:00:04.998 *********** 2025-07-06 19:53:20.686305 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:53:20.687266 | orchestrator | 2025-07-06 19:53:20.688304 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-06 19:53:20.689467 | orchestrator | Sunday 06 July 2025 19:53:20 +0000 (0:00:00.101) 0:00:05.100 *********** 2025-07-06 19:53:21.331621 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:53:21.331904 | orchestrator | 2025-07-06 19:53:21.331930 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-06 19:53:21.332340 | orchestrator | Sunday 06 July 2025 19:53:21 +0000 (0:00:00.644) 0:00:05.744 *********** 2025-07-06 19:53:21.372533 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:53:21.372726 | orchestrator | 2025-07-06 19:53:21.373365 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:53:21.373614 | orchestrator | 2025-07-06 19:53:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 19:53:21.373681 | orchestrator | 2025-07-06 19:53:21 | INFO  | Please wait and do not abort execution. 2025-07-06 19:53:21.374554 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:53:21.375279 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:53:21.375390 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:53:21.376078 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:53:21.376332 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:53:21.376976 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:53:21.378215 | orchestrator | 2025-07-06 19:53:21.378911 | orchestrator | 2025-07-06 19:53:21.379552 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:53:21.380198 | orchestrator | Sunday 06 July 2025 19:53:21 +0000 (0:00:00.042) 0:00:05.787 *********** 2025-07-06 19:53:21.380619 | orchestrator | =============================================================================== 2025-07-06 19:53:21.381256 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.18s 2025-07-06 19:53:21.381773 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.76s 2025-07-06 19:53:21.383248 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.62s 2025-07-06 19:53:21.927017 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-07-06 19:53:23.634581 | orchestrator | Registering Redlock._acquired_script 2025-07-06 19:53:23.634688 | orchestrator | Registering Redlock._extend_script 2025-07-06 19:53:23.634705 | orchestrator | Registering Redlock._release_script 2025-07-06 
19:53:23.692776 | orchestrator | 2025-07-06 19:53:23 | INFO  | Task 7c33df18-ab5d-487f-ba4f-81a64e48773e (wait-for-connection) was prepared for execution. 2025-07-06 19:53:23.692869 | orchestrator | 2025-07-06 19:53:23 | INFO  | It takes a moment until task 7c33df18-ab5d-487f-ba4f-81a64e48773e (wait-for-connection) has been started and output is visible here. 2025-07-06 19:53:27.680204 | orchestrator | 2025-07-06 19:53:27.680324 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-07-06 19:53:27.680340 | orchestrator | 2025-07-06 19:53:27.680352 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-07-06 19:53:27.681097 | orchestrator | Sunday 06 July 2025 19:53:27 +0000 (0:00:00.251) 0:00:00.251 *********** 2025-07-06 19:53:40.296071 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:53:40.296194 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:53:40.296273 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:53:40.297666 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:53:40.298298 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:53:40.299165 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:53:40.299654 | orchestrator | 2025-07-06 19:53:40.300422 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:53:40.300832 | orchestrator | 2025-07-06 19:53:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 19:53:40.301208 | orchestrator | 2025-07-06 19:53:40 | INFO  | Please wait and do not abort execution. 2025-07-06 19:53:40.301603 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:53:40.302309 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:53:40.302647 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:53:40.303086 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:53:40.304133 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:53:40.304921 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:53:40.305388 | orchestrator | 2025-07-06 19:53:40.306141 | orchestrator | 2025-07-06 19:53:40.306546 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:53:40.307292 | orchestrator | Sunday 06 July 2025 19:53:40 +0000 (0:00:12.619) 0:00:12.870 *********** 2025-07-06 19:53:40.307743 | orchestrator | =============================================================================== 2025-07-06 19:53:40.308509 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.62s 2025-07-06 19:53:41.025619 | orchestrator | + osism apply hddtemp 2025-07-06 19:53:42.662406 | orchestrator | Registering Redlock._acquired_script 2025-07-06 19:53:42.662506 | orchestrator | Registering Redlock._extend_script 2025-07-06 19:53:42.662521 | orchestrator | Registering Redlock._release_script 2025-07-06 19:53:42.718560 | orchestrator | 2025-07-06 19:53:42 | INFO  | Task 72935220-dc9b-4302-abfe-ab87904a19c3 (hddtemp) was prepared for execution. 
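Note: the reboot and wait-for-connection plays above form the usual reboot-and-reconnect pattern for the testbed nodes: the reboot play returns immediately (its "wait for the reboot to complete" task is skipped), and a separate play then blocks until every node answers again. A minimal bash sketch of the same sequence as it could appear in a deploy script, using only the two commands visible in the trace:

#!/usr/bin/env bash
set -e

# Trigger the reboot on all testbed nodes; this play does not wait for them to come back.
osism apply reboot -l testbed-nodes -e ireallymeanit=yes

# Block until every node is reachable again before continuing with the deployment.
osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes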
2025-07-06 19:53:42.718652 | orchestrator | 2025-07-06 19:53:42 | INFO  | It takes a moment until task 72935220-dc9b-4302-abfe-ab87904a19c3 (hddtemp) has been started and output is visible here. 2025-07-06 19:53:47.038307 | orchestrator | 2025-07-06 19:53:47.038423 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-07-06 19:53:47.039112 | orchestrator | 2025-07-06 19:53:47.039139 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-07-06 19:53:47.039240 | orchestrator | Sunday 06 July 2025 19:53:47 +0000 (0:00:00.295) 0:00:00.295 *********** 2025-07-06 19:53:47.185490 | orchestrator | ok: [testbed-manager] 2025-07-06 19:53:47.260900 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:53:47.340566 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:53:47.415350 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:53:47.594300 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:53:47.720334 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:53:47.722221 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:53:47.723082 | orchestrator | 2025-07-06 19:53:47.723742 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-07-06 19:53:47.724519 | orchestrator | Sunday 06 July 2025 19:53:47 +0000 (0:00:00.684) 0:00:00.979 *********** 2025-07-06 19:53:48.872099 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:53:48.872682 | orchestrator | 2025-07-06 19:53:48.873046 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-07-06 19:53:48.873646 | orchestrator | Sunday 06 July 2025 19:53:48 +0000 (0:00:01.152) 0:00:02.132 *********** 2025-07-06 19:53:50.788160 | orchestrator | ok: [testbed-manager] 2025-07-06 19:53:50.789639 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:53:50.792496 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:53:50.792548 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:53:50.795602 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:53:50.797248 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:53:50.798388 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:53:50.801702 | orchestrator | 2025-07-06 19:53:50.801938 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-07-06 19:53:50.803446 | orchestrator | Sunday 06 July 2025 19:53:50 +0000 (0:00:01.916) 0:00:04.049 *********** 2025-07-06 19:53:51.531611 | orchestrator | changed: [testbed-manager] 2025-07-06 19:53:51.631409 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:53:52.068587 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:53:52.069576 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:53:52.069684 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:53:52.070867 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:53:52.072638 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:53:52.073701 | orchestrator | 2025-07-06 19:53:52.074507 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-07-06 19:53:52.075212 | orchestrator | Sunday 06 July 2025 19:53:52 +0000 (0:00:01.277) 0:00:05.326 *********** 2025-07-06 19:53:53.248038 | orchestrator | ok: [testbed-node-0] 2025-07-06 
19:53:53.248670 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:53:53.250168 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:53:53.251470 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:53:53.252420 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:53:53.253152 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:53:53.253767 | orchestrator | ok: [testbed-manager] 2025-07-06 19:53:53.254711 | orchestrator | 2025-07-06 19:53:53.255414 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-07-06 19:53:53.255853 | orchestrator | Sunday 06 July 2025 19:53:53 +0000 (0:00:01.183) 0:00:06.510 *********** 2025-07-06 19:53:53.685008 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:53:53.771919 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:53:53.852020 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:53:53.936849 | orchestrator | changed: [testbed-manager] 2025-07-06 19:53:54.066380 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:53:54.066872 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:53:54.067041 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:53:54.068173 | orchestrator | 2025-07-06 19:53:54.068461 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-07-06 19:53:54.071654 | orchestrator | Sunday 06 July 2025 19:53:54 +0000 (0:00:00.816) 0:00:07.326 *********** 2025-07-06 19:54:06.372180 | orchestrator | changed: [testbed-manager] 2025-07-06 19:54:06.372304 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:54:06.376000 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:54:06.379236 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:54:06.381408 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:54:06.382601 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:54:06.383235 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:54:06.383879 | orchestrator | 2025-07-06 19:54:06.384556 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-07-06 19:54:06.384772 | orchestrator | Sunday 06 July 2025 19:54:06 +0000 (0:00:12.306) 0:00:19.633 *********** 2025-07-06 19:54:07.756804 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:54:07.756944 | orchestrator | 2025-07-06 19:54:07.757628 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-07-06 19:54:07.758369 | orchestrator | Sunday 06 July 2025 19:54:07 +0000 (0:00:01.382) 0:00:21.016 *********** 2025-07-06 19:54:09.669777 | orchestrator | changed: [testbed-manager] 2025-07-06 19:54:09.670138 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:54:09.672048 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:54:09.673922 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:54:09.674901 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:54:09.675929 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:54:09.677064 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:54:09.678362 | orchestrator | 2025-07-06 19:54:09.680111 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:54:09.680156 | orchestrator | 2025-07-06 19:54:09 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-07-06 19:54:09.680172 | orchestrator | 2025-07-06 19:54:09 | INFO  | Please wait and do not abort execution. 2025-07-06 19:54:09.681514 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:54:09.682669 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:54:09.683612 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:54:09.684106 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:54:09.684828 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:54:09.685273 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:54:09.686081 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:54:09.686933 | orchestrator | 2025-07-06 19:54:09.687604 | orchestrator | 2025-07-06 19:54:09.688008 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:54:09.689294 | orchestrator | Sunday 06 July 2025 19:54:09 +0000 (0:00:01.915) 0:00:22.932 *********** 2025-07-06 19:54:09.689781 | orchestrator | =============================================================================== 2025-07-06 19:54:09.690545 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.31s 2025-07-06 19:54:09.690864 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.92s 2025-07-06 19:54:09.691540 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.92s 2025-07-06 19:54:09.691949 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.38s 2025-07-06 19:54:09.692900 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.28s 2025-07-06 19:54:09.693842 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.18s 2025-07-06 19:54:09.694074 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.15s 2025-07-06 19:54:09.694675 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.82s 2025-07-06 19:54:09.694903 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.68s 2025-07-06 19:54:10.323559 | orchestrator | ++ semver 9.1.0 7.1.1 2025-07-06 19:54:10.377466 | orchestrator | + [[ 1 -ge 0 ]] 2025-07-06 19:54:10.377568 | orchestrator | + sudo systemctl restart manager.service 2025-07-06 19:54:23.725346 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-07-06 19:54:23.725439 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-07-06 19:54:23.725451 | orchestrator | + local max_attempts=60 2025-07-06 19:54:23.725460 | orchestrator | + local name=ceph-ansible 2025-07-06 19:54:23.725469 | orchestrator | + local attempt_num=1 2025-07-06 19:54:23.725478 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:54:23.762834 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-06 19:54:23.762915 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-06 
19:54:23.762925 | orchestrator | + sleep 5 2025-07-06 19:54:28.767110 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:54:28.804741 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-06 19:54:28.804843 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-06 19:54:28.804859 | orchestrator | + sleep 5 2025-07-06 19:54:33.808171 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:54:33.846148 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-06 19:54:33.846243 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-06 19:54:33.846257 | orchestrator | + sleep 5 2025-07-06 19:54:38.850445 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:54:38.887380 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-06 19:54:38.887494 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-06 19:54:38.887518 | orchestrator | + sleep 5 2025-07-06 19:54:43.892511 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:54:43.934066 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-06 19:54:43.934164 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-06 19:54:43.934179 | orchestrator | + sleep 5 2025-07-06 19:54:48.939343 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:54:48.979898 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-06 19:54:48.980081 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-06 19:54:48.980102 | orchestrator | + sleep 5 2025-07-06 19:54:53.984389 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:54:54.022605 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-06 19:54:54.022706 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-06 19:54:54.022721 | orchestrator | + sleep 5 2025-07-06 19:54:59.029130 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:54:59.066525 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-06 19:54:59.066604 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-06 19:54:59.066613 | orchestrator | + sleep 5 2025-07-06 19:55:04.071215 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:55:04.109114 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-06 19:55:04.109236 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-06 19:55:04.109260 | orchestrator | + sleep 5 2025-07-06 19:55:09.113097 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:55:09.150695 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-06 19:55:09.150810 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-06 19:55:09.150825 | orchestrator | + sleep 5 2025-07-06 19:55:14.154865 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:55:14.192968 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-06 19:55:14.193108 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-06 19:55:14.193125 | orchestrator | + sleep 5 2025-07-06 19:55:19.196723 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:55:19.232086 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-06 
19:55:19.232194 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-06 19:55:19.232211 | orchestrator | + sleep 5 2025-07-06 19:55:24.236811 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:55:24.271808 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-06 19:55:24.271906 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-06 19:55:24.271921 | orchestrator | + sleep 5 2025-07-06 19:55:29.275765 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:55:29.315611 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-06 19:55:29.315710 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-07-06 19:55:29.315751 | orchestrator | + local max_attempts=60 2025-07-06 19:55:29.315760 | orchestrator | + local name=kolla-ansible 2025-07-06 19:55:29.315768 | orchestrator | + local attempt_num=1 2025-07-06 19:55:29.315833 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-07-06 19:55:29.355188 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-06 19:55:29.355292 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-07-06 19:55:29.355309 | orchestrator | + local max_attempts=60 2025-07-06 19:55:29.355321 | orchestrator | + local name=osism-ansible 2025-07-06 19:55:29.355332 | orchestrator | + local attempt_num=1 2025-07-06 19:55:29.355414 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-07-06 19:55:29.386380 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-06 19:55:29.386475 | orchestrator | + [[ true == \t\r\u\e ]] 2025-07-06 19:55:29.386490 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-07-06 19:55:29.566268 | orchestrator | ARA in ceph-ansible already disabled. 2025-07-06 19:55:29.720761 | orchestrator | ARA in kolla-ansible already disabled. 2025-07-06 19:55:29.861320 | orchestrator | ARA in osism-ansible already disabled. 2025-07-06 19:55:29.994259 | orchestrator | ARA in osism-kubernetes already disabled. 2025-07-06 19:55:29.995434 | orchestrator | + osism apply gather-facts 2025-07-06 19:55:31.650879 | orchestrator | Registering Redlock._acquired_script 2025-07-06 19:55:31.650973 | orchestrator | Registering Redlock._extend_script 2025-07-06 19:55:31.650986 | orchestrator | Registering Redlock._release_script 2025-07-06 19:55:31.717297 | orchestrator | 2025-07-06 19:55:31 | INFO  | Task 6fd3f328-1fc6-452f-9583-fdca842af385 (gather-facts) was prepared for execution. 2025-07-06 19:55:31.717386 | orchestrator | 2025-07-06 19:55:31 | INFO  | It takes a moment until task 6fd3f328-1fc6-452f-9583-fdca842af385 (gather-facts) has been started and output is visible here. 
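Note: the xtrace output above shows the wait_for_container_healthy helper polling docker inspect for the container's health state every five seconds until ceph-ansible, kolla-ansible and osism-ansible report healthy. A rough bash reconstruction of such a helper, pieced together from the traced statements (the actual script body is not printed in the log, so the failure branch is an assumption):

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1

    # Poll the Docker health status until the container reports "healthy".
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy in time" >&2   # assumed error handling
            return 1
        fi
        sleep 5
    done
}

# Usage as seen in the trace: wait_for_container_healthy 60 ceph-ansible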
2025-07-06 19:55:35.284865 | orchestrator | 2025-07-06 19:55:35.285690 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-06 19:55:35.287889 | orchestrator | 2025-07-06 19:55:35.289133 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-06 19:55:35.289650 | orchestrator | Sunday 06 July 2025 19:55:35 +0000 (0:00:00.194) 0:00:00.194 *********** 2025-07-06 19:55:41.632449 | orchestrator | ok: [testbed-manager] 2025-07-06 19:55:41.634900 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:55:41.636297 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:55:41.637837 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:55:41.638306 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:55:41.639637 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:55:41.639970 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:55:41.640646 | orchestrator | 2025-07-06 19:55:41.640671 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-07-06 19:55:41.641256 | orchestrator | 2025-07-06 19:55:41.641470 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-07-06 19:55:41.641882 | orchestrator | Sunday 06 July 2025 19:55:41 +0000 (0:00:06.351) 0:00:06.546 *********** 2025-07-06 19:55:41.779100 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:55:41.852629 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:55:41.924968 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:55:41.997344 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:55:42.070813 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:55:42.102345 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:55:42.102446 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:55:42.103556 | orchestrator | 2025-07-06 19:55:42.103654 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:55:42.104568 | orchestrator | 2025-07-06 19:55:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 19:55:42.104603 | orchestrator | 2025-07-06 19:55:42 | INFO  | Please wait and do not abort execution. 
2025-07-06 19:55:42.104838 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:55:42.105703 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:55:42.108278 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:55:42.108567 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:55:42.109317 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:55:42.109700 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:55:42.110190 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:55:42.110218 | orchestrator | 2025-07-06 19:55:42.110353 | orchestrator | 2025-07-06 19:55:42.110818 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:55:42.111153 | orchestrator | Sunday 06 July 2025 19:55:42 +0000 (0:00:00.472) 0:00:07.018 *********** 2025-07-06 19:55:42.115269 | orchestrator | =============================================================================== 2025-07-06 19:55:42.115347 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.35s 2025-07-06 19:55:42.115533 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s 2025-07-06 19:55:42.717637 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-07-06 19:55:42.729266 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-07-06 19:55:42.738916 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-07-06 19:55:42.748322 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-07-06 19:55:42.758449 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-07-06 19:55:42.767798 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-07-06 19:55:42.777112 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-07-06 19:55:42.786336 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-07-06 19:55:42.798467 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-07-06 19:55:42.810085 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-07-06 19:55:42.826641 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-07-06 19:55:42.838179 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-07-06 19:55:42.849980 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh 
/usr/local/bin/upgrade-infrastructure 2025-07-06 19:55:42.866486 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-07-06 19:55:42.876514 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-07-06 19:55:42.888071 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-07-06 19:55:42.905702 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-07-06 19:55:42.918721 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-07-06 19:55:42.936774 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-07-06 19:55:42.956426 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-07-06 19:55:42.975338 | orchestrator | + [[ false == \t\r\u\e ]] 2025-07-06 19:55:43.463013 | orchestrator | ok: Runtime: 0:20:18.738497 2025-07-06 19:55:43.559821 | 2025-07-06 19:55:43.559953 | TASK [Deploy services] 2025-07-06 19:55:44.094166 | orchestrator | skipping: Conditional result was False 2025-07-06 19:55:44.112302 | 2025-07-06 19:55:44.112581 | TASK [Deploy in a nutshell] 2025-07-06 19:55:44.880410 | orchestrator | + set -e 2025-07-06 19:55:44.881915 | orchestrator | 2025-07-06 19:55:44.881975 | orchestrator | # PULL IMAGES 2025-07-06 19:55:44.881992 | orchestrator | 2025-07-06 19:55:44.882138 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-06 19:55:44.882169 | orchestrator | ++ export INTERACTIVE=false 2025-07-06 19:55:44.882185 | orchestrator | ++ INTERACTIVE=false 2025-07-06 19:55:44.882232 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-06 19:55:44.882256 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-06 19:55:44.882271 | orchestrator | + source /opt/manager-vars.sh 2025-07-06 19:55:44.882283 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-06 19:55:44.882302 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-06 19:55:44.882314 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-06 19:55:44.882332 | orchestrator | ++ CEPH_VERSION=reef 2025-07-06 19:55:44.882344 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-06 19:55:44.882362 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-06 19:55:44.882373 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-07-06 19:55:44.882388 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-07-06 19:55:44.882400 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-06 19:55:44.882417 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-06 19:55:44.882429 | orchestrator | ++ export ARA=false 2025-07-06 19:55:44.882440 | orchestrator | ++ ARA=false 2025-07-06 19:55:44.882451 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-06 19:55:44.882482 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-06 19:55:44.882493 | orchestrator | ++ export TEMPEST=false 2025-07-06 19:55:44.882504 | orchestrator | ++ TEMPEST=false 2025-07-06 19:55:44.882515 | orchestrator | ++ export IS_ZUUL=true 2025-07-06 19:55:44.882526 | orchestrator | ++ IS_ZUUL=true 2025-07-06 19:55:44.882537 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.103 2025-07-06 19:55:44.882549 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.103 
2025-07-06 19:55:44.882560 | orchestrator | ++ export EXTERNAL_API=false 2025-07-06 19:55:44.882571 | orchestrator | ++ EXTERNAL_API=false 2025-07-06 19:55:44.882582 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-06 19:55:44.882594 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-06 19:55:44.882605 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-06 19:55:44.882616 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-06 19:55:44.882627 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-06 19:55:44.882651 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-06 19:55:44.882671 | orchestrator | + echo 2025-07-06 19:55:44.882690 | orchestrator | + echo '# PULL IMAGES' 2025-07-06 19:55:44.882709 | orchestrator | + echo 2025-07-06 19:55:44.882740 | orchestrator | ++ semver 9.1.0 7.0.0 2025-07-06 19:55:44.940666 | orchestrator | + [[ 1 -ge 0 ]] 2025-07-06 19:55:44.940760 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-07-06 19:55:46.622175 | orchestrator | 2025-07-06 19:55:46 | INFO  | Trying to run play pull-images in environment custom 2025-07-06 19:55:46.627061 | orchestrator | Registering Redlock._acquired_script 2025-07-06 19:55:46.627199 | orchestrator | Registering Redlock._extend_script 2025-07-06 19:55:46.627233 | orchestrator | Registering Redlock._release_script 2025-07-06 19:55:46.689852 | orchestrator | 2025-07-06 19:55:46 | INFO  | Task 85954588-7cbd-4445-9676-cbaed5da4900 (pull-images) was prepared for execution. 2025-07-06 19:55:46.689960 | orchestrator | 2025-07-06 19:55:46 | INFO  | It takes a moment until task 85954588-7cbd-4445-9676-cbaed5da4900 (pull-images) has been started and output is visible here. 2025-07-06 19:55:50.518496 | orchestrator | 2025-07-06 19:55:50.518618 | orchestrator | PLAY [Pull images] ************************************************************* 2025-07-06 19:55:50.518637 | orchestrator | 2025-07-06 19:55:50.518650 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-07-06 19:55:50.518674 | orchestrator | Sunday 06 July 2025 19:55:50 +0000 (0:00:00.134) 0:00:00.134 *********** 2025-07-06 19:56:55.331357 | orchestrator | changed: [testbed-manager] 2025-07-06 19:56:55.331477 | orchestrator | 2025-07-06 19:56:55.331494 | orchestrator | TASK [Pull other images] ******************************************************* 2025-07-06 19:56:55.331507 | orchestrator | Sunday 06 July 2025 19:56:55 +0000 (0:01:04.812) 0:01:04.947 *********** 2025-07-06 19:57:48.255558 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-07-06 19:57:48.255689 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-07-06 19:57:48.256479 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-07-06 19:57:48.256753 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-07-06 19:57:48.257370 | orchestrator | changed: [testbed-manager] => (item=common) 2025-07-06 19:57:48.258746 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-07-06 19:57:48.259059 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-07-06 19:57:48.259663 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-07-06 19:57:48.260899 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-07-06 19:57:48.261114 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-07-06 19:57:48.261964 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-07-06 19:57:48.263437 | orchestrator | changed: [testbed-manager] => 
(item=magnum) 2025-07-06 19:57:48.264193 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-07-06 19:57:48.264874 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-07-06 19:57:48.266848 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-07-06 19:57:48.268057 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-07-06 19:57:48.268646 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-07-06 19:57:48.269295 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-07-06 19:57:48.269635 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-07-06 19:57:48.270239 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-07-06 19:57:48.270717 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-07-06 19:57:48.272506 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-07-06 19:57:48.272738 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-07-06 19:57:48.273112 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-07-06 19:57:48.274332 | orchestrator | 2025-07-06 19:57:48.275218 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:57:48.275475 | orchestrator | 2025-07-06 19:57:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 19:57:48.275921 | orchestrator | 2025-07-06 19:57:48 | INFO  | Please wait and do not abort execution. 2025-07-06 19:57:48.278957 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:57:48.279400 | orchestrator | 2025-07-06 19:57:48.280621 | orchestrator | 2025-07-06 19:57:48.281965 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:57:48.282976 | orchestrator | Sunday 06 July 2025 19:57:48 +0000 (0:00:52.924) 0:01:57.871 *********** 2025-07-06 19:57:48.284738 | orchestrator | =============================================================================== 2025-07-06 19:57:48.285424 | orchestrator | Pull keystone image ---------------------------------------------------- 64.81s 2025-07-06 19:57:48.285922 | orchestrator | Pull other images ------------------------------------------------------ 52.92s 2025-07-06 19:57:50.551872 | orchestrator | 2025-07-06 19:57:50 | INFO  | Trying to run play wipe-partitions in environment custom 2025-07-06 19:57:50.556543 | orchestrator | Registering Redlock._acquired_script 2025-07-06 19:57:50.556588 | orchestrator | Registering Redlock._extend_script 2025-07-06 19:57:50.556600 | orchestrator | Registering Redlock._release_script 2025-07-06 19:57:50.612572 | orchestrator | 2025-07-06 19:57:50 | INFO  | Task 901fc668-9cd4-46b6-8137-6ab08bfd988d (wipe-partitions) was prepared for execution. 2025-07-06 19:57:50.612664 | orchestrator | 2025-07-06 19:57:50 | INFO  | It takes a moment until task 901fc668-9cd4-46b6-8137-6ab08bfd988d (wipe-partitions) has been started and output is visible here. 
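Note: the pull-images step above is version-gated: the trace first compares MANAGER_VERSION (9.1.0, exported from /opt/manager-vars.sh) against 7.0.0 with a semver helper and only then prefetches the Kolla images on the manager. A reconstruction from the traced commands; the helper is assumed to print 1/0/-1 for newer/equal/older, and the meaning of -r is inferred from OSISM_APPLY_RETRY in include.sh:

# Prefetch container images only on manager versions that ship the pull-images play.
if [[ $(semver "$MANAGER_VERSION" 7.0.0) -ge 0 ]]; then
    # -e custom runs the play from the custom environment;
    # -r 2 presumably allows one retry of the apply.
    osism apply -r 2 -e custom pull-images
fi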
2025-07-06 19:57:54.031365 | orchestrator | 2025-07-06 19:57:54.031500 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-07-06 19:57:54.034092 | orchestrator | 2025-07-06 19:57:54.034782 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-07-06 19:57:54.036145 | orchestrator | Sunday 06 July 2025 19:57:54 +0000 (0:00:00.105) 0:00:00.105 *********** 2025-07-06 19:57:54.614328 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:57:54.614452 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:57:54.614467 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:57:54.614478 | orchestrator | 2025-07-06 19:57:54.614490 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-07-06 19:57:54.614502 | orchestrator | Sunday 06 July 2025 19:57:54 +0000 (0:00:00.581) 0:00:00.686 *********** 2025-07-06 19:57:54.786300 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:57:54.913559 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:57:54.913659 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:57:54.913674 | orchestrator | 2025-07-06 19:57:54.916875 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-07-06 19:57:54.917022 | orchestrator | Sunday 06 July 2025 19:57:54 +0000 (0:00:00.299) 0:00:00.985 *********** 2025-07-06 19:57:55.634348 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:57:55.634459 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:57:55.634474 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:57:55.634497 | orchestrator | 2025-07-06 19:57:55.634845 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-07-06 19:57:55.635058 | orchestrator | Sunday 06 July 2025 19:57:55 +0000 (0:00:00.721) 0:00:01.707 *********** 2025-07-06 19:57:55.795746 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:57:55.884424 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:57:55.884528 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:57:55.888008 | orchestrator | 2025-07-06 19:57:55.888527 | orchestrator | TASK [Check device availability] *********************************************** 2025-07-06 19:57:55.892596 | orchestrator | Sunday 06 July 2025 19:57:55 +0000 (0:00:00.250) 0:00:01.958 *********** 2025-07-06 19:57:57.139597 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-07-06 19:57:57.139705 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-07-06 19:57:57.139720 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-07-06 19:57:57.139846 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-07-06 19:57:57.140248 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-07-06 19:57:57.140829 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-07-06 19:57:57.140943 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-07-06 19:57:57.141703 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-07-06 19:57:57.141970 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-07-06 19:57:57.142333 | orchestrator | 2025-07-06 19:57:57.142618 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-07-06 19:57:57.143739 | orchestrator | Sunday 06 July 2025 19:57:57 +0000 (0:00:01.257) 0:00:03.215 *********** 2025-07-06 19:57:58.455142 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-07-06 19:57:58.455308 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-07-06 19:57:58.455328 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-07-06 19:57:58.455405 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-07-06 19:57:58.459184 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-07-06 19:57:58.459229 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-07-06 19:57:58.459560 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-07-06 19:57:58.459887 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-07-06 19:57:58.459970 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-07-06 19:57:58.460377 | orchestrator | 2025-07-06 19:57:58.460589 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-07-06 19:57:58.460802 | orchestrator | Sunday 06 July 2025 19:57:58 +0000 (0:00:01.312) 0:00:04.527 *********** 2025-07-06 19:58:01.586170 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-07-06 19:58:01.586807 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-07-06 19:58:01.587911 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-07-06 19:58:01.589384 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-07-06 19:58:01.589882 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-07-06 19:58:01.590749 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-07-06 19:58:01.591579 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-07-06 19:58:01.592194 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-07-06 19:58:01.592748 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-07-06 19:58:01.593681 | orchestrator | 2025-07-06 19:58:01.594146 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-07-06 19:58:01.594534 | orchestrator | Sunday 06 July 2025 19:58:01 +0000 (0:00:03.132) 0:00:07.660 *********** 2025-07-06 19:58:02.168747 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:58:02.168877 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:58:02.171997 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:58:02.172238 | orchestrator | 2025-07-06 19:58:02.176709 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-07-06 19:58:02.176992 | orchestrator | Sunday 06 July 2025 19:58:02 +0000 (0:00:00.581) 0:00:08.241 *********** 2025-07-06 19:58:02.756632 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:58:02.758461 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:58:02.758562 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:58:02.758587 | orchestrator | 2025-07-06 19:58:02.760432 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:58:02.760800 | orchestrator | 2025-07-06 19:58:02 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 19:58:02.761265 | orchestrator | 2025-07-06 19:58:02 | INFO  | Please wait and do not abort execution. 
2025-07-06 19:58:02.764056 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:58:02.764666 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:58:02.765227 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:58:02.765810 | orchestrator | 2025-07-06 19:58:02.766245 | orchestrator | 2025-07-06 19:58:02.767047 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:58:02.767466 | orchestrator | Sunday 06 July 2025 19:58:02 +0000 (0:00:00.587) 0:00:08.830 *********** 2025-07-06 19:58:02.767797 | orchestrator | =============================================================================== 2025-07-06 19:58:02.768497 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.13s 2025-07-06 19:58:02.768822 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.31s 2025-07-06 19:58:02.772008 | orchestrator | Check device availability ----------------------------------------------- 1.26s 2025-07-06 19:58:02.772071 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.72s 2025-07-06 19:58:02.772749 | orchestrator | Request device events from the kernel ----------------------------------- 0.59s 2025-07-06 19:58:02.773362 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s 2025-07-06 19:58:02.773929 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s 2025-07-06 19:58:02.774765 | orchestrator | Remove all rook related logical devices --------------------------------- 0.30s 2025-07-06 19:58:02.775486 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2025-07-06 19:58:05.175189 | orchestrator | Registering Redlock._acquired_script 2025-07-06 19:58:05.175293 | orchestrator | Registering Redlock._extend_script 2025-07-06 19:58:05.175308 | orchestrator | Registering Redlock._release_script 2025-07-06 19:58:05.235729 | orchestrator | 2025-07-06 19:58:05 | INFO  | Task 5d2b4daa-ad26-4517-95e0-c444f1f5554a (facts) was prepared for execution. 2025-07-06 19:58:05.236355 | orchestrator | 2025-07-06 19:58:05 | INFO  | It takes a moment until task 5d2b4daa-ad26-4517-95e0-c444f1f5554a (facts) has been started and output is visible here. 
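Note: the wipe-partitions play above prepares the three extra disks per storage node (/dev/sdb, /dev/sdc, /dev/sdd) for Ceph: it wipes filesystem signatures, zeroes the first 32 MiB, reloads the udev rules and requests device events from the kernel. A hedged bash sketch of the equivalent manual steps (the play's exact module arguments are not visible in the log):

# Destructive: clean the Ceph data disks on one storage node.
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    wipefs --all "$dev"                       # drop filesystem/RAID/partition signatures
    dd if=/dev/zero of="$dev" bs=1M count=32  # zero the first 32 MiB of old metadata
done
udevadm control --reload                      # reload udev rules
udevadm trigger                               # request device events from the kernel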
2025-07-06 19:58:08.829439 | orchestrator | 2025-07-06 19:58:08.829585 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-07-06 19:58:08.829603 | orchestrator | 2025-07-06 19:58:08.829727 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-07-06 19:58:08.829746 | orchestrator | Sunday 06 July 2025 19:58:08 +0000 (0:00:00.248) 0:00:00.248 *********** 2025-07-06 19:58:09.661986 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:58:10.108896 | orchestrator | ok: [testbed-manager] 2025-07-06 19:58:10.112964 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:58:10.113046 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:58:10.113200 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:58:10.113354 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:58:10.113882 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:58:10.113976 | orchestrator | 2025-07-06 19:58:10.114214 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-07-06 19:58:10.114413 | orchestrator | Sunday 06 July 2025 19:58:10 +0000 (0:00:01.279) 0:00:01.527 *********** 2025-07-06 19:58:10.253262 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:58:10.336695 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:58:10.415217 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:58:10.478367 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:58:10.533987 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:11.075449 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:11.075853 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:58:11.076000 | orchestrator | 2025-07-06 19:58:11.076249 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-06 19:58:11.076497 | orchestrator | 2025-07-06 19:58:11.076760 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-06 19:58:11.076991 | orchestrator | Sunday 06 July 2025 19:58:11 +0000 (0:00:00.970) 0:00:02.497 *********** 2025-07-06 19:58:13.256094 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:58:17.231156 | orchestrator | ok: [testbed-manager] 2025-07-06 19:58:17.231269 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:58:17.232439 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:58:17.232831 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:58:17.238177 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:58:17.238720 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:58:17.239419 | orchestrator | 2025-07-06 19:58:17.239929 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-07-06 19:58:17.241189 | orchestrator | 2025-07-06 19:58:17.241736 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-07-06 19:58:17.242465 | orchestrator | Sunday 06 July 2025 19:58:17 +0000 (0:00:06.154) 0:00:08.651 *********** 2025-07-06 19:58:17.389942 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:58:17.471541 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:58:17.546580 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:58:17.624123 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:58:17.699333 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:17.743156 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:17.743973 | orchestrator | skipping: 
[testbed-node-5] 2025-07-06 19:58:17.744483 | orchestrator | 2025-07-06 19:58:17.746528 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:58:17.746626 | orchestrator | 2025-07-06 19:58:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 19:58:17.746644 | orchestrator | 2025-07-06 19:58:17 | INFO  | Please wait and do not abort execution. 2025-07-06 19:58:17.747301 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:58:17.748428 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:58:17.749173 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:58:17.749831 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:58:17.750512 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:58:17.751240 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:58:17.752224 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:58:17.752871 | orchestrator | 2025-07-06 19:58:17.753548 | orchestrator | 2025-07-06 19:58:17.754140 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:58:17.754773 | orchestrator | Sunday 06 July 2025 19:58:17 +0000 (0:00:00.512) 0:00:09.164 *********** 2025-07-06 19:58:17.755544 | orchestrator | =============================================================================== 2025-07-06 19:58:17.755879 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.15s 2025-07-06 19:58:17.756814 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.28s 2025-07-06 19:58:17.757819 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 0.97s 2025-07-06 19:58:17.758214 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2025-07-06 19:58:20.253988 | orchestrator | 2025-07-06 19:58:20 | INFO  | Task 43f3c749-6d26-408a-8ee3-c260feb142cc (ceph-configure-lvm-volumes) was prepared for execution. 2025-07-06 19:58:20.254230 | orchestrator | 2025-07-06 19:58:20 | INFO  | It takes a moment until task 43f3c749-6d26-408a-8ee3-c260feb142cc (ceph-configure-lvm-volumes) has been started and output is visible here. 
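Note: the osism.commons.facts role above only had to ensure that the custom facts directory exists; the "Copy fact files" task was skipped, presumably because no extra fact files are configured for this testbed. Ansible local facts conventionally live in /etc/ansible/facts.d and surface under ansible_local during fact gathering; a small hedged example of such a custom fact, assuming the role uses that standard location:

# Hypothetical local fact (path assumed from Ansible's default, not taken from the role).
sudo mkdir -p /etc/ansible/facts.d
printf '[general]\nrole=storage\n' | sudo tee /etc/ansible/facts.d/testbed.fact
# The values then appear as ansible_local.testbed.general.role on the next facts run.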
2025-07-06 19:58:24.436756 | orchestrator | 2025-07-06 19:58:24.438779 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-07-06 19:58:24.439889 | orchestrator | 2025-07-06 19:58:24.441194 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-06 19:58:24.442161 | orchestrator | Sunday 06 July 2025 19:58:24 +0000 (0:00:00.315) 0:00:00.315 *********** 2025-07-06 19:58:24.730658 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 19:58:24.731958 | orchestrator | 2025-07-06 19:58:24.732623 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-06 19:58:24.733262 | orchestrator | Sunday 06 July 2025 19:58:24 +0000 (0:00:00.298) 0:00:00.613 *********** 2025-07-06 19:58:25.028498 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:58:25.030090 | orchestrator | 2025-07-06 19:58:25.032139 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:25.032171 | orchestrator | Sunday 06 July 2025 19:58:25 +0000 (0:00:00.296) 0:00:00.910 *********** 2025-07-06 19:58:25.463740 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-07-06 19:58:25.463977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-07-06 19:58:25.465419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-07-06 19:58:25.466910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-07-06 19:58:25.468138 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-07-06 19:58:25.474834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-07-06 19:58:25.474893 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-07-06 19:58:25.474935 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-07-06 19:58:25.475636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-07-06 19:58:25.478389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-07-06 19:58:25.479278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-07-06 19:58:25.479832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-07-06 19:58:25.483227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-07-06 19:58:25.483253 | orchestrator | 2025-07-06 19:58:25.486555 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:25.491531 | orchestrator | Sunday 06 July 2025 19:58:25 +0000 (0:00:00.426) 0:00:01.336 *********** 2025-07-06 19:58:26.337625 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:26.339103 | orchestrator | 2025-07-06 19:58:26.341474 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:26.342230 | orchestrator | Sunday 06 July 2025 19:58:26 +0000 (0:00:00.884) 0:00:02.221 *********** 2025-07-06 19:58:26.534067 | orchestrator | skipping: [testbed-node-3] 2025-07-06 
19:58:26.539469 | orchestrator | 2025-07-06 19:58:26.539920 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:26.542157 | orchestrator | Sunday 06 July 2025 19:58:26 +0000 (0:00:00.194) 0:00:02.415 *********** 2025-07-06 19:58:26.741786 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:26.744511 | orchestrator | 2025-07-06 19:58:26.745931 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:26.747652 | orchestrator | Sunday 06 July 2025 19:58:26 +0000 (0:00:00.206) 0:00:02.622 *********** 2025-07-06 19:58:26.946347 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:26.946446 | orchestrator | 2025-07-06 19:58:26.947353 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:26.947398 | orchestrator | Sunday 06 July 2025 19:58:26 +0000 (0:00:00.207) 0:00:02.830 *********** 2025-07-06 19:58:27.174830 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:27.175517 | orchestrator | 2025-07-06 19:58:27.176771 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:27.177618 | orchestrator | Sunday 06 July 2025 19:58:27 +0000 (0:00:00.228) 0:00:03.058 *********** 2025-07-06 19:58:27.384913 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:27.387203 | orchestrator | 2025-07-06 19:58:27.387766 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:27.390249 | orchestrator | Sunday 06 July 2025 19:58:27 +0000 (0:00:00.207) 0:00:03.266 *********** 2025-07-06 19:58:27.624405 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:27.624697 | orchestrator | 2025-07-06 19:58:27.626819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:27.630875 | orchestrator | Sunday 06 July 2025 19:58:27 +0000 (0:00:00.240) 0:00:03.506 *********** 2025-07-06 19:58:27.817339 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:27.819805 | orchestrator | 2025-07-06 19:58:27.820635 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:27.822180 | orchestrator | Sunday 06 July 2025 19:58:27 +0000 (0:00:00.194) 0:00:03.701 *********** 2025-07-06 19:58:28.293951 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b) 2025-07-06 19:58:28.295227 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b) 2025-07-06 19:58:28.296530 | orchestrator | 2025-07-06 19:58:28.297380 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:28.298138 | orchestrator | Sunday 06 July 2025 19:58:28 +0000 (0:00:00.476) 0:00:04.178 *********** 2025-07-06 19:58:28.714416 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_901e3f2c-f061-4105-8266-58d4d98b5960) 2025-07-06 19:58:28.716266 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_901e3f2c-f061-4105-8266-58d4d98b5960) 2025-07-06 19:58:28.717662 | orchestrator | 2025-07-06 19:58:28.719536 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:28.720368 | orchestrator | Sunday 06 July 2025 19:58:28 +0000 (0:00:00.418) 0:00:04.596 *********** 2025-07-06 
19:58:29.342508 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_46febb03-7465-44d2-9b41-dd661ec3aa7d) 2025-07-06 19:58:29.343351 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_46febb03-7465-44d2-9b41-dd661ec3aa7d) 2025-07-06 19:58:29.344806 | orchestrator | 2025-07-06 19:58:29.345870 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:29.347335 | orchestrator | Sunday 06 July 2025 19:58:29 +0000 (0:00:00.629) 0:00:05.226 *********** 2025-07-06 19:58:29.938502 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ad2af1d2-0168-4556-9317-4e4f08581fa1) 2025-07-06 19:58:29.939126 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ad2af1d2-0168-4556-9317-4e4f08581fa1) 2025-07-06 19:58:29.943035 | orchestrator | 2025-07-06 19:58:29.943113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:29.943129 | orchestrator | Sunday 06 July 2025 19:58:29 +0000 (0:00:00.595) 0:00:05.821 *********** 2025-07-06 19:58:30.687791 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-06 19:58:30.689458 | orchestrator | 2025-07-06 19:58:30.690927 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:30.691906 | orchestrator | Sunday 06 July 2025 19:58:30 +0000 (0:00:00.745) 0:00:06.567 *********** 2025-07-06 19:58:31.054955 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-07-06 19:58:31.056264 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-07-06 19:58:31.059023 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-07-06 19:58:31.059086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-07-06 19:58:31.059110 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-07-06 19:58:31.059128 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-07-06 19:58:31.060056 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-07-06 19:58:31.060473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-07-06 19:58:31.061169 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-07-06 19:58:31.061870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-07-06 19:58:31.062732 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-07-06 19:58:31.063708 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-07-06 19:58:31.063766 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-07-06 19:58:31.064195 | orchestrator | 2025-07-06 19:58:31.064762 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:31.065233 | orchestrator | Sunday 06 July 2025 19:58:31 +0000 (0:00:00.370) 0:00:06.937 *********** 2025-07-06 19:58:31.257083 | orchestrator | skipping: [testbed-node-3] 
2025-07-06 19:58:31.257189 | orchestrator | 2025-07-06 19:58:31.257517 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:31.258067 | orchestrator | Sunday 06 July 2025 19:58:31 +0000 (0:00:00.202) 0:00:07.140 *********** 2025-07-06 19:58:31.470430 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:31.471181 | orchestrator | 2025-07-06 19:58:31.471852 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:31.472683 | orchestrator | Sunday 06 July 2025 19:58:31 +0000 (0:00:00.214) 0:00:07.354 *********** 2025-07-06 19:58:31.677563 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:31.678299 | orchestrator | 2025-07-06 19:58:31.680469 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:31.681504 | orchestrator | Sunday 06 July 2025 19:58:31 +0000 (0:00:00.206) 0:00:07.560 *********** 2025-07-06 19:58:31.893096 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:31.893202 | orchestrator | 2025-07-06 19:58:31.893768 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:31.894514 | orchestrator | Sunday 06 July 2025 19:58:31 +0000 (0:00:00.214) 0:00:07.775 *********** 2025-07-06 19:58:32.084388 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:32.085063 | orchestrator | 2025-07-06 19:58:32.085463 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:32.086411 | orchestrator | Sunday 06 July 2025 19:58:32 +0000 (0:00:00.192) 0:00:07.968 *********** 2025-07-06 19:58:32.292464 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:32.293405 | orchestrator | 2025-07-06 19:58:32.293439 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:32.294116 | orchestrator | Sunday 06 July 2025 19:58:32 +0000 (0:00:00.208) 0:00:08.176 *********** 2025-07-06 19:58:32.509708 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:32.511806 | orchestrator | 2025-07-06 19:58:32.514368 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:32.514427 | orchestrator | Sunday 06 July 2025 19:58:32 +0000 (0:00:00.216) 0:00:08.392 *********** 2025-07-06 19:58:32.734680 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:32.734781 | orchestrator | 2025-07-06 19:58:32.738153 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:32.739145 | orchestrator | Sunday 06 July 2025 19:58:32 +0000 (0:00:00.224) 0:00:08.616 *********** 2025-07-06 19:58:33.771617 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-07-06 19:58:33.772947 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-07-06 19:58:33.773816 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-07-06 19:58:33.776936 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-07-06 19:58:33.777983 | orchestrator | 2025-07-06 19:58:33.778846 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:33.780596 | orchestrator | Sunday 06 July 2025 19:58:33 +0000 (0:00:01.037) 0:00:09.654 *********** 2025-07-06 19:58:33.972759 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:33.974105 | orchestrator | 2025-07-06 19:58:33.974953 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:33.975801 | orchestrator | Sunday 06 July 2025 19:58:33 +0000 (0:00:00.202) 0:00:09.857 *********** 2025-07-06 19:58:34.185093 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:34.185219 | orchestrator | 2025-07-06 19:58:34.186074 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:34.186925 | orchestrator | Sunday 06 July 2025 19:58:34 +0000 (0:00:00.209) 0:00:10.066 *********** 2025-07-06 19:58:34.402188 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:34.402596 | orchestrator | 2025-07-06 19:58:34.403061 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:34.404170 | orchestrator | Sunday 06 July 2025 19:58:34 +0000 (0:00:00.219) 0:00:10.286 *********** 2025-07-06 19:58:34.630773 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:34.636934 | orchestrator | 2025-07-06 19:58:34.637630 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-07-06 19:58:34.638638 | orchestrator | Sunday 06 July 2025 19:58:34 +0000 (0:00:00.226) 0:00:10.513 *********** 2025-07-06 19:58:34.841189 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-07-06 19:58:34.841466 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-07-06 19:58:34.842669 | orchestrator | 2025-07-06 19:58:34.843384 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-07-06 19:58:34.846762 | orchestrator | Sunday 06 July 2025 19:58:34 +0000 (0:00:00.209) 0:00:10.723 *********** 2025-07-06 19:58:35.013934 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:35.014274 | orchestrator | 2025-07-06 19:58:35.016623 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-07-06 19:58:35.016669 | orchestrator | Sunday 06 July 2025 19:58:35 +0000 (0:00:00.171) 0:00:10.894 *********** 2025-07-06 19:58:35.163579 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:35.164864 | orchestrator | 2025-07-06 19:58:35.166601 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-06 19:58:35.167706 | orchestrator | Sunday 06 July 2025 19:58:35 +0000 (0:00:00.153) 0:00:11.047 *********** 2025-07-06 19:58:35.341509 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:35.341708 | orchestrator | 2025-07-06 19:58:35.342585 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-06 19:58:35.344174 | orchestrator | Sunday 06 July 2025 19:58:35 +0000 (0:00:00.176) 0:00:11.224 *********** 2025-07-06 19:58:35.497777 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:58:35.499794 | orchestrator | 2025-07-06 19:58:35.499880 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-06 19:58:35.500096 | orchestrator | Sunday 06 July 2025 19:58:35 +0000 (0:00:00.157) 0:00:11.382 *********** 2025-07-06 19:58:35.690719 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5b3ebdad-89cb-5093-adb4-41e3a34848e3'}}) 2025-07-06 19:58:35.691200 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67620618-3322-5703-9264-076cb24f91fa'}}) 2025-07-06 19:58:35.693736 | orchestrator | 
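The UUIDs assigned above (for example 5b3ebdad-89cb-5093-adb4-41e3a34848e3) all carry the version digit 5, which is consistent with name-based (uuid5) generation; if that holds, re-running the play reproduces the same VG/LV names for a given host and disk, although the log alone does not prove determinism. Each device ends up paired with an osd-block-<uuid> logical volume in a ceph-<uuid> volume group, as the configuration data printed further down confirms. A sketch of what the "Write configuration file" handler presumably persists for testbed-node-3 on the manager (values copied from this log; the target file name and path are assumptions):

# Assumed shape of the written host configuration for testbed-node-3
# (values taken from this log; the destination file is not shown here).
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 5b3ebdad-89cb-5093-adb4-41e3a34848e3
  sdc:
    osd_lvm_uuid: 67620618-3322-5703-9264-076cb24f91fa
lvm_volumes:
  - data: osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3
    data_vg: ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3
  - data: osd-block-67620618-3322-5703-9264-076cb24f91fa
    data_vg: ceph-67620618-3322-5703-9264-076cb24f91fa

As the tasks that follow show, only the block-only variant produces entries in this run; the block+db, block+wal and block+db+wal variants are skipped, which suggests no separate DB or WAL devices are configured for these nodes.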
2025-07-06 19:58:35.693773 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-07-06 19:58:35.693787 | orchestrator | Sunday 06 July 2025 19:58:35 +0000 (0:00:00.192) 0:00:11.574 *********** 2025-07-06 19:58:35.845108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5b3ebdad-89cb-5093-adb4-41e3a34848e3'}})  2025-07-06 19:58:35.845223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67620618-3322-5703-9264-076cb24f91fa'}})  2025-07-06 19:58:35.845239 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:35.845252 | orchestrator | 2025-07-06 19:58:35.845669 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-06 19:58:35.846655 | orchestrator | Sunday 06 July 2025 19:58:35 +0000 (0:00:00.152) 0:00:11.726 *********** 2025-07-06 19:58:36.311441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5b3ebdad-89cb-5093-adb4-41e3a34848e3'}})  2025-07-06 19:58:36.311641 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67620618-3322-5703-9264-076cb24f91fa'}})  2025-07-06 19:58:36.312930 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:36.314701 | orchestrator | 2025-07-06 19:58:36.315875 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-06 19:58:36.318841 | orchestrator | Sunday 06 July 2025 19:58:36 +0000 (0:00:00.469) 0:00:12.196 *********** 2025-07-06 19:58:36.472404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5b3ebdad-89cb-5093-adb4-41e3a34848e3'}})  2025-07-06 19:58:36.476728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67620618-3322-5703-9264-076cb24f91fa'}})  2025-07-06 19:58:36.477469 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:36.481067 | orchestrator | 2025-07-06 19:58:36.481793 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-06 19:58:36.487678 | orchestrator | Sunday 06 July 2025 19:58:36 +0000 (0:00:00.158) 0:00:12.354 *********** 2025-07-06 19:58:36.626358 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:58:36.626771 | orchestrator | 2025-07-06 19:58:36.628641 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-07-06 19:58:36.629945 | orchestrator | Sunday 06 July 2025 19:58:36 +0000 (0:00:00.154) 0:00:12.509 *********** 2025-07-06 19:58:36.768848 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:58:36.770274 | orchestrator | 2025-07-06 19:58:36.770695 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-07-06 19:58:36.770873 | orchestrator | Sunday 06 July 2025 19:58:36 +0000 (0:00:00.144) 0:00:12.653 *********** 2025-07-06 19:58:36.913321 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:36.914209 | orchestrator | 2025-07-06 19:58:36.915450 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-07-06 19:58:36.917072 | orchestrator | Sunday 06 July 2025 19:58:36 +0000 (0:00:00.142) 0:00:12.796 *********** 2025-07-06 19:58:37.050507 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:37.052290 | orchestrator | 2025-07-06 19:58:37.053575 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-07-06 19:58:37.058211 | orchestrator | Sunday 06 July 2025 19:58:37 +0000 (0:00:00.138) 0:00:12.934 *********** 2025-07-06 19:58:37.194079 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:37.198620 | orchestrator | 2025-07-06 19:58:37.199169 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-07-06 19:58:37.201331 | orchestrator | Sunday 06 July 2025 19:58:37 +0000 (0:00:00.142) 0:00:13.077 *********** 2025-07-06 19:58:37.353237 | orchestrator | ok: [testbed-node-3] => { 2025-07-06 19:58:37.353405 | orchestrator |  "ceph_osd_devices": { 2025-07-06 19:58:37.358741 | orchestrator |  "sdb": { 2025-07-06 19:58:37.358767 | orchestrator |  "osd_lvm_uuid": "5b3ebdad-89cb-5093-adb4-41e3a34848e3" 2025-07-06 19:58:37.358778 | orchestrator |  }, 2025-07-06 19:58:37.358788 | orchestrator |  "sdc": { 2025-07-06 19:58:37.358798 | orchestrator |  "osd_lvm_uuid": "67620618-3322-5703-9264-076cb24f91fa" 2025-07-06 19:58:37.358808 | orchestrator |  } 2025-07-06 19:58:37.358818 | orchestrator |  } 2025-07-06 19:58:37.358828 | orchestrator | } 2025-07-06 19:58:37.359208 | orchestrator | 2025-07-06 19:58:37.359812 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-07-06 19:58:37.360533 | orchestrator | Sunday 06 July 2025 19:58:37 +0000 (0:00:00.158) 0:00:13.235 *********** 2025-07-06 19:58:37.485748 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:37.488656 | orchestrator | 2025-07-06 19:58:37.488834 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-07-06 19:58:37.489271 | orchestrator | Sunday 06 July 2025 19:58:37 +0000 (0:00:00.133) 0:00:13.369 *********** 2025-07-06 19:58:37.597861 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:37.598477 | orchestrator | 2025-07-06 19:58:37.598583 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-07-06 19:58:37.599049 | orchestrator | Sunday 06 July 2025 19:58:37 +0000 (0:00:00.113) 0:00:13.482 *********** 2025-07-06 19:58:37.704869 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:58:37.705464 | orchestrator | 2025-07-06 19:58:37.705564 | orchestrator | TASK [Print configuration data] ************************************************ 2025-07-06 19:58:37.706396 | orchestrator | Sunday 06 July 2025 19:58:37 +0000 (0:00:00.107) 0:00:13.590 *********** 2025-07-06 19:58:37.885519 | orchestrator | changed: [testbed-node-3] => { 2025-07-06 19:58:37.886756 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-07-06 19:58:37.889818 | orchestrator |  "ceph_osd_devices": { 2025-07-06 19:58:37.890263 | orchestrator |  "sdb": { 2025-07-06 19:58:37.890914 | orchestrator |  "osd_lvm_uuid": "5b3ebdad-89cb-5093-adb4-41e3a34848e3" 2025-07-06 19:58:37.891811 | orchestrator |  }, 2025-07-06 19:58:37.892456 | orchestrator |  "sdc": { 2025-07-06 19:58:37.892990 | orchestrator |  "osd_lvm_uuid": "67620618-3322-5703-9264-076cb24f91fa" 2025-07-06 19:58:37.893873 | orchestrator |  } 2025-07-06 19:58:37.894122 | orchestrator |  }, 2025-07-06 19:58:37.894414 | orchestrator |  "lvm_volumes": [ 2025-07-06 19:58:37.895617 | orchestrator |  { 2025-07-06 19:58:37.896260 | orchestrator |  "data": "osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3", 2025-07-06 19:58:37.896431 | orchestrator |  "data_vg": "ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3" 2025-07-06 19:58:37.899007 | orchestrator |  }, 2025-07-06 
19:58:37.899106 | orchestrator |  { 2025-07-06 19:58:37.899787 | orchestrator |  "data": "osd-block-67620618-3322-5703-9264-076cb24f91fa", 2025-07-06 19:58:37.899890 | orchestrator |  "data_vg": "ceph-67620618-3322-5703-9264-076cb24f91fa" 2025-07-06 19:58:37.900389 | orchestrator |  } 2025-07-06 19:58:37.900794 | orchestrator |  ] 2025-07-06 19:58:37.900916 | orchestrator |  } 2025-07-06 19:58:37.901327 | orchestrator | } 2025-07-06 19:58:37.902088 | orchestrator | 2025-07-06 19:58:37.902116 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-07-06 19:58:37.902191 | orchestrator | Sunday 06 July 2025 19:58:37 +0000 (0:00:00.180) 0:00:13.770 *********** 2025-07-06 19:58:39.765247 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 19:58:39.767250 | orchestrator | 2025-07-06 19:58:39.767676 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-07-06 19:58:39.769526 | orchestrator | 2025-07-06 19:58:39.770143 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-06 19:58:39.771151 | orchestrator | Sunday 06 July 2025 19:58:39 +0000 (0:00:01.878) 0:00:15.648 *********** 2025-07-06 19:58:39.986627 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-07-06 19:58:39.990157 | orchestrator | 2025-07-06 19:58:39.991017 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-06 19:58:39.992109 | orchestrator | Sunday 06 July 2025 19:58:39 +0000 (0:00:00.222) 0:00:15.871 *********** 2025-07-06 19:58:40.196563 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:58:40.196742 | orchestrator | 2025-07-06 19:58:40.198201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:40.199608 | orchestrator | Sunday 06 July 2025 19:58:40 +0000 (0:00:00.209) 0:00:16.080 *********** 2025-07-06 19:58:40.515085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-07-06 19:58:40.517219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-07-06 19:58:40.518927 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-07-06 19:58:40.520259 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-07-06 19:58:40.521671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-07-06 19:58:40.522410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-07-06 19:58:40.523772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-07-06 19:58:40.525421 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-07-06 19:58:40.526137 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-07-06 19:58:40.526728 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-07-06 19:58:40.527436 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-07-06 19:58:40.527904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-07-06 19:58:40.528632 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-07-06 19:58:40.529463 | orchestrator | 2025-07-06 19:58:40.530347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:40.530567 | orchestrator | Sunday 06 July 2025 19:58:40 +0000 (0:00:00.317) 0:00:16.397 *********** 2025-07-06 19:58:40.714171 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:40.714727 | orchestrator | 2025-07-06 19:58:40.715122 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:40.716070 | orchestrator | Sunday 06 July 2025 19:58:40 +0000 (0:00:00.199) 0:00:16.597 *********** 2025-07-06 19:58:40.887992 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:40.888096 | orchestrator | 2025-07-06 19:58:40.888481 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:40.889504 | orchestrator | Sunday 06 July 2025 19:58:40 +0000 (0:00:00.173) 0:00:16.770 *********** 2025-07-06 19:58:41.063996 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:41.064852 | orchestrator | 2025-07-06 19:58:41.068410 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:41.070865 | orchestrator | Sunday 06 July 2025 19:58:41 +0000 (0:00:00.178) 0:00:16.948 *********** 2025-07-06 19:58:41.245729 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:41.245834 | orchestrator | 2025-07-06 19:58:41.247781 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:41.248394 | orchestrator | Sunday 06 July 2025 19:58:41 +0000 (0:00:00.179) 0:00:17.128 *********** 2025-07-06 19:58:41.753848 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:41.754140 | orchestrator | 2025-07-06 19:58:41.755286 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:41.759076 | orchestrator | Sunday 06 July 2025 19:58:41 +0000 (0:00:00.509) 0:00:17.637 *********** 2025-07-06 19:58:41.930135 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:41.930425 | orchestrator | 2025-07-06 19:58:41.934514 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:41.935177 | orchestrator | Sunday 06 July 2025 19:58:41 +0000 (0:00:00.174) 0:00:17.812 *********** 2025-07-06 19:58:42.118394 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:42.119016 | orchestrator | 2025-07-06 19:58:42.120567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:42.124378 | orchestrator | Sunday 06 July 2025 19:58:42 +0000 (0:00:00.190) 0:00:18.003 *********** 2025-07-06 19:58:42.303548 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:42.304820 | orchestrator | 2025-07-06 19:58:42.305591 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:42.309397 | orchestrator | Sunday 06 July 2025 19:58:42 +0000 (0:00:00.184) 0:00:18.188 *********** 2025-07-06 19:58:42.667855 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e) 2025-07-06 19:58:42.668734 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e) 2025-07-06 19:58:42.669715 | orchestrator | 2025-07-06 
19:58:42.670462 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:42.674360 | orchestrator | Sunday 06 July 2025 19:58:42 +0000 (0:00:00.364) 0:00:18.552 *********** 2025-07-06 19:58:43.048981 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_95e38168-1e77-4099-bfde-ad7249670c4c) 2025-07-06 19:58:43.049153 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_95e38168-1e77-4099-bfde-ad7249670c4c) 2025-07-06 19:58:43.049690 | orchestrator | 2025-07-06 19:58:43.050558 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:43.053895 | orchestrator | Sunday 06 July 2025 19:58:43 +0000 (0:00:00.380) 0:00:18.933 *********** 2025-07-06 19:58:43.440126 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_951512cc-5411-4e34-a1bc-779e76dbc3d2) 2025-07-06 19:58:43.441534 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_951512cc-5411-4e34-a1bc-779e76dbc3d2) 2025-07-06 19:58:43.443004 | orchestrator | 2025-07-06 19:58:43.443172 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:43.446547 | orchestrator | Sunday 06 July 2025 19:58:43 +0000 (0:00:00.392) 0:00:19.325 *********** 2025-07-06 19:58:43.865628 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6eb6290b-216e-4753-9f37-507fd8d1c155) 2025-07-06 19:58:43.867808 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6eb6290b-216e-4753-9f37-507fd8d1c155) 2025-07-06 19:58:43.868461 | orchestrator | 2025-07-06 19:58:43.869833 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:43.870557 | orchestrator | Sunday 06 July 2025 19:58:43 +0000 (0:00:00.423) 0:00:19.749 *********** 2025-07-06 19:58:44.185363 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-06 19:58:44.185456 | orchestrator | 2025-07-06 19:58:44.188768 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:44.189723 | orchestrator | Sunday 06 July 2025 19:58:44 +0000 (0:00:00.317) 0:00:20.066 *********** 2025-07-06 19:58:44.529839 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-07-06 19:58:44.531843 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-07-06 19:58:44.532604 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-07-06 19:58:44.536361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-07-06 19:58:44.537380 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-07-06 19:58:44.538348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-07-06 19:58:44.542187 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-07-06 19:58:44.543266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-07-06 19:58:44.544286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-07-06 19:58:44.545127 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-07-06 19:58:44.546084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-07-06 19:58:44.546786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-07-06 19:58:44.547460 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-07-06 19:58:44.548241 | orchestrator | 2025-07-06 19:58:44.548634 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:44.549237 | orchestrator | Sunday 06 July 2025 19:58:44 +0000 (0:00:00.347) 0:00:20.414 *********** 2025-07-06 19:58:44.728491 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:44.730880 | orchestrator | 2025-07-06 19:58:44.731017 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:44.731036 | orchestrator | Sunday 06 July 2025 19:58:44 +0000 (0:00:00.198) 0:00:20.612 *********** 2025-07-06 19:58:45.255086 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:45.255220 | orchestrator | 2025-07-06 19:58:45.258238 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:45.258897 | orchestrator | Sunday 06 July 2025 19:58:45 +0000 (0:00:00.525) 0:00:21.138 *********** 2025-07-06 19:58:45.431899 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:45.432103 | orchestrator | 2025-07-06 19:58:45.432132 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:45.432598 | orchestrator | Sunday 06 July 2025 19:58:45 +0000 (0:00:00.176) 0:00:21.315 *********** 2025-07-06 19:58:45.606579 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:45.607041 | orchestrator | 2025-07-06 19:58:45.607399 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:45.608207 | orchestrator | Sunday 06 July 2025 19:58:45 +0000 (0:00:00.176) 0:00:21.491 *********** 2025-07-06 19:58:45.784518 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:45.785664 | orchestrator | 2025-07-06 19:58:45.789889 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:45.791366 | orchestrator | Sunday 06 July 2025 19:58:45 +0000 (0:00:00.177) 0:00:21.669 *********** 2025-07-06 19:58:45.977789 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:45.977907 | orchestrator | 2025-07-06 19:58:45.977989 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:45.978999 | orchestrator | Sunday 06 July 2025 19:58:45 +0000 (0:00:00.190) 0:00:21.860 *********** 2025-07-06 19:58:46.169514 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:46.170677 | orchestrator | 2025-07-06 19:58:46.172655 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:46.172789 | orchestrator | Sunday 06 July 2025 19:58:46 +0000 (0:00:00.193) 0:00:22.053 *********** 2025-07-06 19:58:46.343415 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:46.343601 | orchestrator | 2025-07-06 19:58:46.344945 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:46.347648 | orchestrator | Sunday 06 July 2025 
19:58:46 +0000 (0:00:00.174) 0:00:22.228 *********** 2025-07-06 19:58:46.967111 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-07-06 19:58:46.968327 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-07-06 19:58:46.970461 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-07-06 19:58:46.971658 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-07-06 19:58:46.972509 | orchestrator | 2025-07-06 19:58:46.973238 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:46.975599 | orchestrator | Sunday 06 July 2025 19:58:46 +0000 (0:00:00.622) 0:00:22.850 *********** 2025-07-06 19:58:47.170427 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:47.173626 | orchestrator | 2025-07-06 19:58:47.173752 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:47.176892 | orchestrator | Sunday 06 July 2025 19:58:47 +0000 (0:00:00.204) 0:00:23.054 *********** 2025-07-06 19:58:47.368397 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:47.368652 | orchestrator | 2025-07-06 19:58:47.369760 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:47.371072 | orchestrator | Sunday 06 July 2025 19:58:47 +0000 (0:00:00.196) 0:00:23.251 *********** 2025-07-06 19:58:47.572147 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:47.573108 | orchestrator | 2025-07-06 19:58:47.574323 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:47.575222 | orchestrator | Sunday 06 July 2025 19:58:47 +0000 (0:00:00.204) 0:00:23.455 *********** 2025-07-06 19:58:47.779642 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:47.780626 | orchestrator | 2025-07-06 19:58:47.781571 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-07-06 19:58:47.787594 | orchestrator | Sunday 06 July 2025 19:58:47 +0000 (0:00:00.206) 0:00:23.662 *********** 2025-07-06 19:58:48.122091 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-07-06 19:58:48.123726 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-07-06 19:58:48.124671 | orchestrator | 2025-07-06 19:58:48.126557 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-07-06 19:58:48.127205 | orchestrator | Sunday 06 July 2025 19:58:48 +0000 (0:00:00.341) 0:00:24.004 *********** 2025-07-06 19:58:48.257719 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:48.259259 | orchestrator | 2025-07-06 19:58:48.263209 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-07-06 19:58:48.263296 | orchestrator | Sunday 06 July 2025 19:58:48 +0000 (0:00:00.136) 0:00:24.141 *********** 2025-07-06 19:58:48.399507 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:48.401015 | orchestrator | 2025-07-06 19:58:48.402385 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-06 19:58:48.403437 | orchestrator | Sunday 06 July 2025 19:58:48 +0000 (0:00:00.141) 0:00:24.282 *********** 2025-07-06 19:58:48.539583 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:48.540977 | orchestrator | 2025-07-06 19:58:48.542474 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-06 
19:58:48.543635 | orchestrator | Sunday 06 July 2025 19:58:48 +0000 (0:00:00.140) 0:00:24.423 *********** 2025-07-06 19:58:48.693933 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:58:48.695644 | orchestrator | 2025-07-06 19:58:48.696926 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-06 19:58:48.698378 | orchestrator | Sunday 06 July 2025 19:58:48 +0000 (0:00:00.153) 0:00:24.576 *********** 2025-07-06 19:58:48.865153 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6b2ac7c1-b26c-557b-8077-56c3cb59db23'}}) 2025-07-06 19:58:48.867038 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'}}) 2025-07-06 19:58:48.871058 | orchestrator | 2025-07-06 19:58:48.871774 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-07-06 19:58:48.872859 | orchestrator | Sunday 06 July 2025 19:58:48 +0000 (0:00:00.171) 0:00:24.748 *********** 2025-07-06 19:58:49.028495 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6b2ac7c1-b26c-557b-8077-56c3cb59db23'}})  2025-07-06 19:58:49.033192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'}})  2025-07-06 19:58:49.034339 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:49.035452 | orchestrator | 2025-07-06 19:58:49.036668 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-06 19:58:49.037672 | orchestrator | Sunday 06 July 2025 19:58:49 +0000 (0:00:00.160) 0:00:24.909 *********** 2025-07-06 19:58:49.196378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6b2ac7c1-b26c-557b-8077-56c3cb59db23'}})  2025-07-06 19:58:49.198948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'}})  2025-07-06 19:58:49.198998 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:49.199669 | orchestrator | 2025-07-06 19:58:49.201541 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-06 19:58:49.202477 | orchestrator | Sunday 06 July 2025 19:58:49 +0000 (0:00:00.168) 0:00:25.077 *********** 2025-07-06 19:58:49.347151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6b2ac7c1-b26c-557b-8077-56c3cb59db23'}})  2025-07-06 19:58:49.349776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'}})  2025-07-06 19:58:49.352708 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:49.352727 | orchestrator | 2025-07-06 19:58:49.353874 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-06 19:58:49.354810 | orchestrator | Sunday 06 July 2025 19:58:49 +0000 (0:00:00.150) 0:00:25.228 *********** 2025-07-06 19:58:49.477470 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:58:49.479864 | orchestrator | 2025-07-06 19:58:49.480995 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-07-06 19:58:49.483001 | orchestrator | Sunday 06 July 2025 19:58:49 +0000 (0:00:00.131) 0:00:25.360 *********** 2025-07-06 19:58:49.622995 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:58:49.625587 
| orchestrator | 2025-07-06 19:58:49.625620 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-07-06 19:58:49.626109 | orchestrator | Sunday 06 July 2025 19:58:49 +0000 (0:00:00.145) 0:00:25.505 *********** 2025-07-06 19:58:49.755010 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:49.757019 | orchestrator | 2025-07-06 19:58:49.760723 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-07-06 19:58:49.760749 | orchestrator | Sunday 06 July 2025 19:58:49 +0000 (0:00:00.132) 0:00:25.638 *********** 2025-07-06 19:58:50.213553 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:50.215095 | orchestrator | 2025-07-06 19:58:50.215849 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-07-06 19:58:50.218156 | orchestrator | Sunday 06 July 2025 19:58:50 +0000 (0:00:00.459) 0:00:26.098 *********** 2025-07-06 19:58:50.369675 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:50.370179 | orchestrator | 2025-07-06 19:58:50.372388 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-07-06 19:58:50.372854 | orchestrator | Sunday 06 July 2025 19:58:50 +0000 (0:00:00.155) 0:00:26.253 *********** 2025-07-06 19:58:50.515532 | orchestrator | ok: [testbed-node-4] => { 2025-07-06 19:58:50.515836 | orchestrator |  "ceph_osd_devices": { 2025-07-06 19:58:50.517012 | orchestrator |  "sdb": { 2025-07-06 19:58:50.517821 | orchestrator |  "osd_lvm_uuid": "6b2ac7c1-b26c-557b-8077-56c3cb59db23" 2025-07-06 19:58:50.519436 | orchestrator |  }, 2025-07-06 19:58:50.520208 | orchestrator |  "sdc": { 2025-07-06 19:58:50.521000 | orchestrator |  "osd_lvm_uuid": "e81f0ba1-e76a-5ac2-85fd-9d5b359e204d" 2025-07-06 19:58:50.524515 | orchestrator |  } 2025-07-06 19:58:50.524665 | orchestrator |  } 2025-07-06 19:58:50.525347 | orchestrator | } 2025-07-06 19:58:50.526320 | orchestrator | 2025-07-06 19:58:50.526857 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-07-06 19:58:50.527429 | orchestrator | Sunday 06 July 2025 19:58:50 +0000 (0:00:00.146) 0:00:26.399 *********** 2025-07-06 19:58:50.657183 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:50.658205 | orchestrator | 2025-07-06 19:58:50.659791 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-07-06 19:58:50.664177 | orchestrator | Sunday 06 July 2025 19:58:50 +0000 (0:00:00.140) 0:00:26.540 *********** 2025-07-06 19:58:50.805525 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:50.807080 | orchestrator | 2025-07-06 19:58:50.807750 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-07-06 19:58:50.809343 | orchestrator | Sunday 06 July 2025 19:58:50 +0000 (0:00:00.149) 0:00:26.689 *********** 2025-07-06 19:58:50.937335 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:58:50.937523 | orchestrator | 2025-07-06 19:58:50.938239 | orchestrator | TASK [Print configuration data] ************************************************ 2025-07-06 19:58:50.938758 | orchestrator | Sunday 06 July 2025 19:58:50 +0000 (0:00:00.132) 0:00:26.822 *********** 2025-07-06 19:58:51.130079 | orchestrator | changed: [testbed-node-4] => { 2025-07-06 19:58:51.131519 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-07-06 19:58:51.132928 | orchestrator |  "ceph_osd_devices": { 2025-07-06 
19:58:51.134157 | orchestrator |  "sdb": { 2025-07-06 19:58:51.135105 | orchestrator |  "osd_lvm_uuid": "6b2ac7c1-b26c-557b-8077-56c3cb59db23" 2025-07-06 19:58:51.136365 | orchestrator |  }, 2025-07-06 19:58:51.137188 | orchestrator |  "sdc": { 2025-07-06 19:58:51.137880 | orchestrator |  "osd_lvm_uuid": "e81f0ba1-e76a-5ac2-85fd-9d5b359e204d" 2025-07-06 19:58:51.138519 | orchestrator |  } 2025-07-06 19:58:51.139123 | orchestrator |  }, 2025-07-06 19:58:51.139667 | orchestrator |  "lvm_volumes": [ 2025-07-06 19:58:51.140282 | orchestrator |  { 2025-07-06 19:58:51.140839 | orchestrator |  "data": "osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23", 2025-07-06 19:58:51.141044 | orchestrator |  "data_vg": "ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23" 2025-07-06 19:58:51.141477 | orchestrator |  }, 2025-07-06 19:58:51.141853 | orchestrator |  { 2025-07-06 19:58:51.142319 | orchestrator |  "data": "osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d", 2025-07-06 19:58:51.143400 | orchestrator |  "data_vg": "ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d" 2025-07-06 19:58:51.143609 | orchestrator |  } 2025-07-06 19:58:51.143859 | orchestrator |  ] 2025-07-06 19:58:51.144099 | orchestrator |  } 2025-07-06 19:58:51.144777 | orchestrator | } 2025-07-06 19:58:51.145095 | orchestrator | 2025-07-06 19:58:51.145756 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-07-06 19:58:51.146120 | orchestrator | Sunday 06 July 2025 19:58:51 +0000 (0:00:00.190) 0:00:27.013 *********** 2025-07-06 19:58:52.318252 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-07-06 19:58:52.319199 | orchestrator | 2025-07-06 19:58:52.320079 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-07-06 19:58:52.321185 | orchestrator | 2025-07-06 19:58:52.321988 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-06 19:58:52.322763 | orchestrator | Sunday 06 July 2025 19:58:52 +0000 (0:00:01.187) 0:00:28.200 *********** 2025-07-06 19:58:52.854178 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-07-06 19:58:52.854706 | orchestrator | 2025-07-06 19:58:52.855712 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-06 19:58:52.856491 | orchestrator | Sunday 06 July 2025 19:58:52 +0000 (0:00:00.536) 0:00:28.737 *********** 2025-07-06 19:58:53.614936 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:58:53.615041 | orchestrator | 2025-07-06 19:58:53.616858 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:53.617186 | orchestrator | Sunday 06 July 2025 19:58:53 +0000 (0:00:00.758) 0:00:29.496 *********** 2025-07-06 19:58:53.969169 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-07-06 19:58:53.970003 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-07-06 19:58:53.972702 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-07-06 19:58:53.973535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-07-06 19:58:53.974680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-07-06 19:58:53.975791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-07-06 19:58:53.977085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-07-06 19:58:53.977614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-07-06 19:58:53.978415 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-07-06 19:58:53.979788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-07-06 19:58:53.980092 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-07-06 19:58:53.981280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-07-06 19:58:53.982260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-07-06 19:58:53.982988 | orchestrator | 2025-07-06 19:58:53.983560 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:53.984554 | orchestrator | Sunday 06 July 2025 19:58:53 +0000 (0:00:00.356) 0:00:29.853 *********** 2025-07-06 19:58:54.182209 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:58:54.184219 | orchestrator | 2025-07-06 19:58:54.185105 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:54.185706 | orchestrator | Sunday 06 July 2025 19:58:54 +0000 (0:00:00.210) 0:00:30.063 *********** 2025-07-06 19:58:54.399846 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:58:54.401084 | orchestrator | 2025-07-06 19:58:54.402436 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:54.405007 | orchestrator | Sunday 06 July 2025 19:58:54 +0000 (0:00:00.220) 0:00:30.283 *********** 2025-07-06 19:58:54.629084 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:58:54.630684 | orchestrator | 2025-07-06 19:58:54.633925 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:54.633995 | orchestrator | Sunday 06 July 2025 19:58:54 +0000 (0:00:00.227) 0:00:30.511 *********** 2025-07-06 19:58:54.831791 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:58:54.833263 | orchestrator | 2025-07-06 19:58:54.834722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:54.835729 | orchestrator | Sunday 06 July 2025 19:58:54 +0000 (0:00:00.202) 0:00:30.713 *********** 2025-07-06 19:58:55.042351 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:58:55.043203 | orchestrator | 2025-07-06 19:58:55.044707 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:55.047584 | orchestrator | Sunday 06 July 2025 19:58:55 +0000 (0:00:00.210) 0:00:30.924 *********** 2025-07-06 19:58:55.247812 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:58:55.248620 | orchestrator | 2025-07-06 19:58:55.253041 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:55.253081 | orchestrator | Sunday 06 July 2025 19:58:55 +0000 (0:00:00.204) 0:00:31.128 *********** 2025-07-06 19:58:55.437414 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:58:55.439757 | orchestrator | 2025-07-06 19:58:55.441390 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-07-06 19:58:55.442099 | orchestrator | Sunday 06 July 2025 19:58:55 +0000 (0:00:00.189) 0:00:31.318 *********** 2025-07-06 19:58:55.643778 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:58:55.644728 | orchestrator | 2025-07-06 19:58:55.645525 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:55.646770 | orchestrator | Sunday 06 July 2025 19:58:55 +0000 (0:00:00.205) 0:00:31.524 *********** 2025-07-06 19:58:56.355313 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280) 2025-07-06 19:58:56.358323 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280) 2025-07-06 19:58:56.358363 | orchestrator | 2025-07-06 19:58:56.358941 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:56.359571 | orchestrator | Sunday 06 July 2025 19:58:56 +0000 (0:00:00.712) 0:00:32.237 *********** 2025-07-06 19:58:57.200720 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d394e861-9c48-44bd-b1dc-9e2695f6f7e7) 2025-07-06 19:58:57.200822 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d394e861-9c48-44bd-b1dc-9e2695f6f7e7) 2025-07-06 19:58:57.203167 | orchestrator | 2025-07-06 19:58:57.203278 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:57.204265 | orchestrator | Sunday 06 July 2025 19:58:57 +0000 (0:00:00.845) 0:00:33.083 *********** 2025-07-06 19:58:57.606259 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ee53a9be-d7f6-4740-ab76-379edf2c3c5b) 2025-07-06 19:58:57.607032 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ee53a9be-d7f6-4740-ab76-379edf2c3c5b) 2025-07-06 19:58:57.607619 | orchestrator | 2025-07-06 19:58:57.609193 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:57.609543 | orchestrator | Sunday 06 July 2025 19:58:57 +0000 (0:00:00.407) 0:00:33.490 *********** 2025-07-06 19:58:58.031288 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_825fbe01-1f52-40fd-870f-6965feac768c) 2025-07-06 19:58:58.031696 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_825fbe01-1f52-40fd-870f-6965feac768c) 2025-07-06 19:58:58.032845 | orchestrator | 2025-07-06 19:58:58.034281 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:58:58.034790 | orchestrator | Sunday 06 July 2025 19:58:58 +0000 (0:00:00.423) 0:00:33.914 *********** 2025-07-06 19:58:58.340101 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-06 19:58:58.340668 | orchestrator | 2025-07-06 19:58:58.341411 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:58.342893 | orchestrator | Sunday 06 July 2025 19:58:58 +0000 (0:00:00.308) 0:00:34.223 *********** 2025-07-06 19:58:58.741054 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-07-06 19:58:58.741601 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-07-06 19:58:58.745677 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-07-06 19:58:58.745720 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-07-06 19:58:58.745738 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-07-06 19:58:58.746578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-07-06 19:58:58.747680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-07-06 19:58:58.748447 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-07-06 19:58:58.749047 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-07-06 19:58:58.749442 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-07-06 19:58:58.750152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-07-06 19:58:58.750464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-07-06 19:58:58.750966 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-07-06 19:58:58.751576 | orchestrator | 2025-07-06 19:58:58.752083 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:58.752441 | orchestrator | Sunday 06 July 2025 19:58:58 +0000 (0:00:00.401) 0:00:34.624 *********** 2025-07-06 19:58:58.957611 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:58:58.958849 | orchestrator | 2025-07-06 19:58:58.961801 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:58.961846 | orchestrator | Sunday 06 July 2025 19:58:58 +0000 (0:00:00.216) 0:00:34.841 *********** 2025-07-06 19:58:59.183781 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:58:59.183941 | orchestrator | 2025-07-06 19:58:59.184789 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:59.185005 | orchestrator | Sunday 06 July 2025 19:58:59 +0000 (0:00:00.225) 0:00:35.067 *********** 2025-07-06 19:58:59.384355 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:58:59.385488 | orchestrator | 2025-07-06 19:58:59.387146 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:59.388086 | orchestrator | Sunday 06 July 2025 19:58:59 +0000 (0:00:00.199) 0:00:35.266 *********** 2025-07-06 19:58:59.579198 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:58:59.579421 | orchestrator | 2025-07-06 19:58:59.584749 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:59.585658 | orchestrator | Sunday 06 July 2025 19:58:59 +0000 (0:00:00.196) 0:00:35.462 *********** 2025-07-06 19:58:59.774315 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:58:59.775736 | orchestrator | 2025-07-06 19:58:59.777260 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:58:59.778520 | orchestrator | Sunday 06 July 2025 19:58:59 +0000 (0:00:00.195) 0:00:35.658 *********** 2025-07-06 19:59:00.632985 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:00.633154 | orchestrator | 2025-07-06 19:59:00.634079 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-07-06 19:59:00.634716 | orchestrator | Sunday 06 July 2025 19:59:00 +0000 (0:00:00.857) 0:00:36.516 *********** 2025-07-06 19:59:00.870979 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:00.871953 | orchestrator | 2025-07-06 19:59:00.872873 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:00.873620 | orchestrator | Sunday 06 July 2025 19:59:00 +0000 (0:00:00.238) 0:00:36.754 *********** 2025-07-06 19:59:01.108514 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:01.109147 | orchestrator | 2025-07-06 19:59:01.109447 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:01.110065 | orchestrator | Sunday 06 July 2025 19:59:01 +0000 (0:00:00.238) 0:00:36.992 *********** 2025-07-06 19:59:01.942381 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-07-06 19:59:01.943838 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-07-06 19:59:01.944545 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-07-06 19:59:01.945289 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-07-06 19:59:01.946549 | orchestrator | 2025-07-06 19:59:01.947570 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:01.948148 | orchestrator | Sunday 06 July 2025 19:59:01 +0000 (0:00:00.834) 0:00:37.826 *********** 2025-07-06 19:59:02.138001 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:02.140522 | orchestrator | 2025-07-06 19:59:02.141573 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:02.143956 | orchestrator | Sunday 06 July 2025 19:59:02 +0000 (0:00:00.194) 0:00:38.021 *********** 2025-07-06 19:59:02.360146 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:02.361317 | orchestrator | 2025-07-06 19:59:02.362654 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:02.363274 | orchestrator | Sunday 06 July 2025 19:59:02 +0000 (0:00:00.220) 0:00:38.241 *********** 2025-07-06 19:59:02.595782 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:02.596075 | orchestrator | 2025-07-06 19:59:02.597221 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:02.598467 | orchestrator | Sunday 06 July 2025 19:59:02 +0000 (0:00:00.237) 0:00:38.479 *********** 2025-07-06 19:59:02.832534 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:02.833404 | orchestrator | 2025-07-06 19:59:02.834756 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-07-06 19:59:02.834788 | orchestrator | Sunday 06 July 2025 19:59:02 +0000 (0:00:00.236) 0:00:38.716 *********** 2025-07-06 19:59:03.027163 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-07-06 19:59:03.027744 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-07-06 19:59:03.028963 | orchestrator | 2025-07-06 19:59:03.030111 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-07-06 19:59:03.031184 | orchestrator | Sunday 06 July 2025 19:59:03 +0000 (0:00:00.195) 0:00:38.911 *********** 2025-07-06 19:59:03.176094 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:03.176820 | orchestrator | 2025-07-06 19:59:03.178215 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-07-06 19:59:03.179093 | orchestrator | Sunday 06 July 2025 19:59:03 +0000 (0:00:00.148) 0:00:39.060 *********** 2025-07-06 19:59:03.332059 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:03.332419 | orchestrator | 2025-07-06 19:59:03.333003 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-06 19:59:03.334130 | orchestrator | Sunday 06 July 2025 19:59:03 +0000 (0:00:00.154) 0:00:39.215 *********** 2025-07-06 19:59:03.476331 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:03.477219 | orchestrator | 2025-07-06 19:59:03.478625 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-06 19:59:03.479417 | orchestrator | Sunday 06 July 2025 19:59:03 +0000 (0:00:00.145) 0:00:39.360 *********** 2025-07-06 19:59:03.840101 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:59:03.841014 | orchestrator | 2025-07-06 19:59:03.843448 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-06 19:59:03.844783 | orchestrator | Sunday 06 July 2025 19:59:03 +0000 (0:00:00.362) 0:00:39.722 *********** 2025-07-06 19:59:04.028982 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4472ae94-c442-5fee-95ac-d2e3b3e55ca4'}}) 2025-07-06 19:59:04.029745 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c6cf71a-fa39-576b-8a24-237c163534df'}}) 2025-07-06 19:59:04.030632 | orchestrator | 2025-07-06 19:59:04.031587 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-07-06 19:59:04.032098 | orchestrator | Sunday 06 July 2025 19:59:04 +0000 (0:00:00.190) 0:00:39.913 *********** 2025-07-06 19:59:04.200018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4472ae94-c442-5fee-95ac-d2e3b3e55ca4'}})  2025-07-06 19:59:04.201333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c6cf71a-fa39-576b-8a24-237c163534df'}})  2025-07-06 19:59:04.202112 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:04.202999 | orchestrator | 2025-07-06 19:59:04.203635 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-06 19:59:04.204329 | orchestrator | Sunday 06 July 2025 19:59:04 +0000 (0:00:00.170) 0:00:40.084 *********** 2025-07-06 19:59:04.345984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4472ae94-c442-5fee-95ac-d2e3b3e55ca4'}})  2025-07-06 19:59:04.346670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c6cf71a-fa39-576b-8a24-237c163534df'}})  2025-07-06 19:59:04.348223 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:04.349266 | orchestrator | 2025-07-06 19:59:04.350410 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-06 19:59:04.351723 | orchestrator | Sunday 06 July 2025 19:59:04 +0000 (0:00:00.143) 0:00:40.227 *********** 2025-07-06 19:59:04.506273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4472ae94-c442-5fee-95ac-d2e3b3e55ca4'}})  2025-07-06 19:59:04.506371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c6cf71a-fa39-576b-8a24-237c163534df'}})  2025-07-06 
19:59:04.506386 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:04.507428 | orchestrator | 2025-07-06 19:59:04.508094 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-06 19:59:04.508827 | orchestrator | Sunday 06 July 2025 19:59:04 +0000 (0:00:00.157) 0:00:40.385 *********** 2025-07-06 19:59:04.638303 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:59:04.639373 | orchestrator | 2025-07-06 19:59:04.640120 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-07-06 19:59:04.641968 | orchestrator | Sunday 06 July 2025 19:59:04 +0000 (0:00:00.136) 0:00:40.522 *********** 2025-07-06 19:59:04.778668 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:59:04.779771 | orchestrator | 2025-07-06 19:59:04.780805 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-07-06 19:59:04.781971 | orchestrator | Sunday 06 July 2025 19:59:04 +0000 (0:00:00.141) 0:00:40.663 *********** 2025-07-06 19:59:04.907108 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:04.908962 | orchestrator | 2025-07-06 19:59:04.909899 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-07-06 19:59:04.911078 | orchestrator | Sunday 06 July 2025 19:59:04 +0000 (0:00:00.127) 0:00:40.791 *********** 2025-07-06 19:59:05.029575 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:05.029953 | orchestrator | 2025-07-06 19:59:05.030548 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-07-06 19:59:05.031811 | orchestrator | Sunday 06 July 2025 19:59:05 +0000 (0:00:00.121) 0:00:40.912 *********** 2025-07-06 19:59:05.159010 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:05.159359 | orchestrator | 2025-07-06 19:59:05.160807 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-07-06 19:59:05.161785 | orchestrator | Sunday 06 July 2025 19:59:05 +0000 (0:00:00.130) 0:00:41.043 *********** 2025-07-06 19:59:05.337628 | orchestrator | ok: [testbed-node-5] => { 2025-07-06 19:59:05.339719 | orchestrator |  "ceph_osd_devices": { 2025-07-06 19:59:05.343609 | orchestrator |  "sdb": { 2025-07-06 19:59:05.344399 | orchestrator |  "osd_lvm_uuid": "4472ae94-c442-5fee-95ac-d2e3b3e55ca4" 2025-07-06 19:59:05.345565 | orchestrator |  }, 2025-07-06 19:59:05.346792 | orchestrator |  "sdc": { 2025-07-06 19:59:05.347803 | orchestrator |  "osd_lvm_uuid": "8c6cf71a-fa39-576b-8a24-237c163534df" 2025-07-06 19:59:05.348810 | orchestrator |  } 2025-07-06 19:59:05.350117 | orchestrator |  } 2025-07-06 19:59:05.351165 | orchestrator | } 2025-07-06 19:59:05.352312 | orchestrator | 2025-07-06 19:59:05.352972 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-07-06 19:59:05.353958 | orchestrator | Sunday 06 July 2025 19:59:05 +0000 (0:00:00.177) 0:00:41.221 *********** 2025-07-06 19:59:05.474707 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:05.475125 | orchestrator | 2025-07-06 19:59:05.475887 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-07-06 19:59:05.477540 | orchestrator | Sunday 06 July 2025 19:59:05 +0000 (0:00:00.138) 0:00:41.359 *********** 2025-07-06 19:59:05.812573 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:05.813484 | orchestrator | 2025-07-06 19:59:05.814208 | orchestrator | 
TASK [Print shared DB/WAL devices] ********************************************* 2025-07-06 19:59:05.815477 | orchestrator | Sunday 06 July 2025 19:59:05 +0000 (0:00:00.337) 0:00:41.696 *********** 2025-07-06 19:59:05.953568 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:59:05.953706 | orchestrator | 2025-07-06 19:59:05.955879 | orchestrator | TASK [Print configuration data] ************************************************ 2025-07-06 19:59:05.957390 | orchestrator | Sunday 06 July 2025 19:59:05 +0000 (0:00:00.141) 0:00:41.837 *********** 2025-07-06 19:59:06.176582 | orchestrator | changed: [testbed-node-5] => { 2025-07-06 19:59:06.177309 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-07-06 19:59:06.178798 | orchestrator |  "ceph_osd_devices": { 2025-07-06 19:59:06.180046 | orchestrator |  "sdb": { 2025-07-06 19:59:06.181007 | orchestrator |  "osd_lvm_uuid": "4472ae94-c442-5fee-95ac-d2e3b3e55ca4" 2025-07-06 19:59:06.181880 | orchestrator |  }, 2025-07-06 19:59:06.182439 | orchestrator |  "sdc": { 2025-07-06 19:59:06.182740 | orchestrator |  "osd_lvm_uuid": "8c6cf71a-fa39-576b-8a24-237c163534df" 2025-07-06 19:59:06.183557 | orchestrator |  } 2025-07-06 19:59:06.184177 | orchestrator |  }, 2025-07-06 19:59:06.184766 | orchestrator |  "lvm_volumes": [ 2025-07-06 19:59:06.185318 | orchestrator |  { 2025-07-06 19:59:06.186222 | orchestrator |  "data": "osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4", 2025-07-06 19:59:06.186950 | orchestrator |  "data_vg": "ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4" 2025-07-06 19:59:06.187297 | orchestrator |  }, 2025-07-06 19:59:06.187825 | orchestrator |  { 2025-07-06 19:59:06.188392 | orchestrator |  "data": "osd-block-8c6cf71a-fa39-576b-8a24-237c163534df", 2025-07-06 19:59:06.189047 | orchestrator |  "data_vg": "ceph-8c6cf71a-fa39-576b-8a24-237c163534df" 2025-07-06 19:59:06.189197 | orchestrator |  } 2025-07-06 19:59:06.189655 | orchestrator |  ] 2025-07-06 19:59:06.190116 | orchestrator |  } 2025-07-06 19:59:06.190598 | orchestrator | } 2025-07-06 19:59:06.191037 | orchestrator | 2025-07-06 19:59:06.191409 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-07-06 19:59:06.191821 | orchestrator | Sunday 06 July 2025 19:59:06 +0000 (0:00:00.222) 0:00:42.060 *********** 2025-07-06 19:59:07.141018 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-07-06 19:59:07.141231 | orchestrator | 2025-07-06 19:59:07.142314 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:59:07.142794 | orchestrator | 2025-07-06 19:59:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 19:59:07.143156 | orchestrator | 2025-07-06 19:59:07 | INFO  | Please wait and do not abort execution. 
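The play for testbed-node-5 derives one VG/LV pair per OSD disk from the osd_lvm_uuid values shown above: the data LV is named osd-block-<uuid> and lives in a VG named ceph-<uuid>, which is exactly what the printed lvm_volumes list contains. A minimal sketch of how the "Generate lvm_volumes structure (block only)" step could be written with set_fact; the helper fact name _lvm_volumes_block is an assumption, while the item shape and the naming scheme follow the log output:

    - name: Generate lvm_volumes structure (block only)
      ansible.builtin.set_fact:
        _lvm_volumes_block: >-
          {{ _lvm_volumes_block | default([]) + [{
               'data': 'osd-block-' + item.value.osd_lvm_uuid,
               'data_vg': 'ceph-' + item.value.osd_lvm_uuid
             }] }}
      # Items have the shape {'key': 'sdb', 'value': {'osd_lvm_uuid': '...'}},
      # matching the loop output logged for this task.
      loop: "{{ ceph_osd_devices | dict2items }}"

The compiled list is what the "Write configuration file" handler then persists, delegated to testbed-manager.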
2025-07-06 19:59:07.144706 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-07-06 19:59:07.145446 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-07-06 19:59:07.146694 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-07-06 19:59:07.147627 | orchestrator | 2025-07-06 19:59:07.148964 | orchestrator | 2025-07-06 19:59:07.149609 | orchestrator | 2025-07-06 19:59:07.150825 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:59:07.151494 | orchestrator | Sunday 06 July 2025 19:59:07 +0000 (0:00:00.964) 0:00:43.024 *********** 2025-07-06 19:59:07.152559 | orchestrator | =============================================================================== 2025-07-06 19:59:07.154246 | orchestrator | Write configuration file ------------------------------------------------ 4.03s 2025-07-06 19:59:07.154831 | orchestrator | Get initial list of available block devices ----------------------------- 1.26s 2025-07-06 19:59:07.155499 | orchestrator | Add known partitions to the list of available block devices ------------- 1.12s 2025-07-06 19:59:07.156766 | orchestrator | Add known links to the list of available block devices ------------------ 1.10s 2025-07-06 19:59:07.157389 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.06s 2025-07-06 19:59:07.158525 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s 2025-07-06 19:59:07.160099 | orchestrator | Add known links to the list of available block devices ------------------ 0.88s 2025-07-06 19:59:07.161601 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2025-07-06 19:59:07.162424 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s 2025-07-06 19:59:07.163252 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s 2025-07-06 19:59:07.164275 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.78s 2025-07-06 19:59:07.165211 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.75s 2025-07-06 19:59:07.166424 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2025-07-06 19:59:07.167405 | orchestrator | Set WAL devices config data --------------------------------------------- 0.72s 2025-07-06 19:59:07.168137 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2025-07-06 19:59:07.168822 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.67s 2025-07-06 19:59:07.169389 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2025-07-06 19:59:07.170418 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s 2025-07-06 19:59:07.171170 | orchestrator | Print DB devices -------------------------------------------------------- 0.60s 2025-07-06 19:59:07.172249 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2025-07-06 19:59:19.670230 | orchestrator | Registering Redlock._acquired_script 2025-07-06 19:59:19.670332 | orchestrator | Registering Redlock._extend_script 2025-07-06 
19:59:19.670348 | orchestrator | Registering Redlock._release_script 2025-07-06 19:59:19.726739 | orchestrator | 2025-07-06 19:59:19 | INFO  | Task e9f63d78-7911-4822-a7e0-5eef3a21901d (sync inventory) is running in background. Output coming soon. 2025-07-06 19:59:37.659482 | orchestrator | 2025-07-06 19:59:20 | INFO  | Starting group_vars file reorganization 2025-07-06 19:59:37.659573 | orchestrator | 2025-07-06 19:59:20 | INFO  | Moved 0 file(s) to their respective directories 2025-07-06 19:59:37.659584 | orchestrator | 2025-07-06 19:59:20 | INFO  | Group_vars file reorganization completed 2025-07-06 19:59:37.659592 | orchestrator | 2025-07-06 19:59:23 | INFO  | Starting variable preparation from inventory 2025-07-06 19:59:37.659599 | orchestrator | 2025-07-06 19:59:24 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-07-06 19:59:37.659607 | orchestrator | 2025-07-06 19:59:24 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-07-06 19:59:37.659631 | orchestrator | 2025-07-06 19:59:24 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-07-06 19:59:37.659639 | orchestrator | 2025-07-06 19:59:24 | INFO  | 3 file(s) written, 6 host(s) processed 2025-07-06 19:59:37.659645 | orchestrator | 2025-07-06 19:59:24 | INFO  | Variable preparation completed: 2025-07-06 19:59:37.659651 | orchestrator | 2025-07-06 19:59:25 | INFO  | Starting inventory overwrite handling 2025-07-06 19:59:37.659657 | orchestrator | 2025-07-06 19:59:25 | INFO  | Handling group overwrites in 99-overwrite 2025-07-06 19:59:37.659663 | orchestrator | 2025-07-06 19:59:25 | INFO  | Removing group frr:children from 60-generic 2025-07-06 19:59:37.659669 | orchestrator | 2025-07-06 19:59:25 | INFO  | Removing group storage:children from 50-kolla 2025-07-06 19:59:37.659676 | orchestrator | 2025-07-06 19:59:25 | INFO  | Removing group netbird:children from 50-infrastruture 2025-07-06 19:59:37.659689 | orchestrator | 2025-07-06 19:59:25 | INFO  | Removing group ceph-rgw from 50-ceph 2025-07-06 19:59:37.659696 | orchestrator | 2025-07-06 19:59:25 | INFO  | Removing group ceph-mds from 50-ceph 2025-07-06 19:59:37.659702 | orchestrator | 2025-07-06 19:59:25 | INFO  | Handling group overwrites in 20-roles 2025-07-06 19:59:37.659708 | orchestrator | 2025-07-06 19:59:25 | INFO  | Removing group k3s_node from 50-infrastruture 2025-07-06 19:59:37.659714 | orchestrator | 2025-07-06 19:59:25 | INFO  | Removed 6 group(s) in total 2025-07-06 19:59:37.659720 | orchestrator | 2025-07-06 19:59:25 | INFO  | Inventory overwrite handling completed 2025-07-06 19:59:37.659726 | orchestrator | 2025-07-06 19:59:26 | INFO  | Starting merge of inventory files 2025-07-06 19:59:37.659766 | orchestrator | 2025-07-06 19:59:26 | INFO  | Inventory files merged successfully 2025-07-06 19:59:37.659772 | orchestrator | 2025-07-06 19:59:29 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-07-06 19:59:37.659779 | orchestrator | 2025-07-06 19:59:36 | INFO  | Successfully wrote ClusterShell configuration 2025-07-06 19:59:37.659786 | orchestrator | [master 6cc192c] 2025-07-06-19-59 2025-07-06 19:59:37.659793 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-07-06 19:59:39.571110 | orchestrator | 2025-07-06 19:59:39 | INFO  | Task eb0538a1-3842-4f4c-9b8d-d281230e6056 (ceph-create-lvm-devices) was prepared for execution. 
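The sync-inventory task regenerates derived group_vars files before the next play starts. Going by the file names logged above, the generated files plausibly look like the following; every value is a placeholder for illustration and none is taken from this run:

    # 050-ceph-cluster-fsid.yml (generated)
    ceph_cluster_fsid: "00000000-0000-0000-0000-000000000000"  # placeholder

    # 050-infrastructure-cephclient-mons.yml (generated)
    cephclient_mons:
      - "192.168.16.10"  # placeholder address

    # 050-kolla-ceph-rgw-hosts.yml (generated)
    ceph_rgw_hosts:
      - "testbed-node-0"  # placeholder host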
2025-07-06 19:59:39.571225 | orchestrator | 2025-07-06 19:59:39 | INFO  | It takes a moment until task eb0538a1-3842-4f4c-9b8d-d281230e6056 (ceph-create-lvm-devices) has been started and output is visible here. 2025-07-06 19:59:43.665813 | orchestrator | 2025-07-06 19:59:43.666161 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-07-06 19:59:43.667461 | orchestrator | 2025-07-06 19:59:43.670082 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-06 19:59:43.670675 | orchestrator | Sunday 06 July 2025 19:59:43 +0000 (0:00:00.306) 0:00:00.306 *********** 2025-07-06 19:59:43.911835 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 19:59:43.911994 | orchestrator | 2025-07-06 19:59:43.912913 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-06 19:59:43.914542 | orchestrator | Sunday 06 July 2025 19:59:43 +0000 (0:00:00.248) 0:00:00.554 *********** 2025-07-06 19:59:44.130163 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:59:44.130782 | orchestrator | 2025-07-06 19:59:44.132220 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:59:44.133236 | orchestrator | Sunday 06 July 2025 19:59:44 +0000 (0:00:00.218) 0:00:00.772 *********** 2025-07-06 19:59:44.521795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-07-06 19:59:44.522829 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-07-06 19:59:44.523821 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-07-06 19:59:44.525467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-07-06 19:59:44.526525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-07-06 19:59:44.528317 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-07-06 19:59:44.528764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-07-06 19:59:44.529869 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-07-06 19:59:44.530472 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-07-06 19:59:44.531292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-07-06 19:59:44.532249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-07-06 19:59:44.532857 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-07-06 19:59:44.533576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-07-06 19:59:44.534261 | orchestrator | 2025-07-06 19:59:44.534949 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:59:44.535599 | orchestrator | Sunday 06 July 2025 19:59:44 +0000 (0:00:00.391) 0:00:01.164 *********** 2025-07-06 19:59:44.963739 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:44.964909 | orchestrator | 2025-07-06 19:59:44.966738 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
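The ceph-create-lvm-devices play starts the same way for testbed-node-3 as it did for node-5: it gathers every block device the node reports (loop0..loop7, sda..sdd, sr0) and then includes _add-device-links.yml once per device. A minimal sketch of that pattern using Ansible's hardware facts; the fact name _available_devices is illustrative, not taken from the playbook:

    - name: Get initial list of available block devices
      ansible.builtin.set_fact:
        _available_devices: "{{ ansible_facts['devices'] | list }}"

    - name: Add known links to the list of available block devices
      ansible.builtin.include_tasks: _add-device-links.yml
      loop: "{{ ansible_facts['devices'] | list }}"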
2025-07-06 19:59:44.967010 | orchestrator | Sunday 06 July 2025 19:59:44 +0000 (0:00:00.441) 0:00:01.606 *********** 2025-07-06 19:59:45.155920 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:45.157381 | orchestrator | 2025-07-06 19:59:45.158872 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:59:45.159667 | orchestrator | Sunday 06 July 2025 19:59:45 +0000 (0:00:00.193) 0:00:01.799 *********** 2025-07-06 19:59:45.346302 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:45.346949 | orchestrator | 2025-07-06 19:59:45.347963 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:59:45.349080 | orchestrator | Sunday 06 July 2025 19:59:45 +0000 (0:00:00.190) 0:00:01.989 *********** 2025-07-06 19:59:45.532527 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:45.533271 | orchestrator | 2025-07-06 19:59:45.534320 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:59:45.534781 | orchestrator | Sunday 06 July 2025 19:59:45 +0000 (0:00:00.186) 0:00:02.175 *********** 2025-07-06 19:59:45.723509 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:45.724167 | orchestrator | 2025-07-06 19:59:45.726171 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:59:45.726812 | orchestrator | Sunday 06 July 2025 19:59:45 +0000 (0:00:00.189) 0:00:02.365 *********** 2025-07-06 19:59:45.913307 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:45.913408 | orchestrator | 2025-07-06 19:59:45.913895 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:59:45.914155 | orchestrator | Sunday 06 July 2025 19:59:45 +0000 (0:00:00.192) 0:00:02.557 *********** 2025-07-06 19:59:46.103292 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:46.103486 | orchestrator | 2025-07-06 19:59:46.104085 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:59:46.104532 | orchestrator | Sunday 06 July 2025 19:59:46 +0000 (0:00:00.187) 0:00:02.744 *********** 2025-07-06 19:59:46.308504 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:46.309309 | orchestrator | 2025-07-06 19:59:46.309798 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:59:46.310420 | orchestrator | Sunday 06 July 2025 19:59:46 +0000 (0:00:00.205) 0:00:02.950 *********** 2025-07-06 19:59:46.708025 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b) 2025-07-06 19:59:46.708440 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b) 2025-07-06 19:59:46.709182 | orchestrator | 2025-07-06 19:59:46.709825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:59:46.712260 | orchestrator | Sunday 06 July 2025 19:59:46 +0000 (0:00:00.400) 0:00:03.351 *********** 2025-07-06 19:59:47.096335 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_901e3f2c-f061-4105-8266-58d4d98b5960) 2025-07-06 19:59:47.096489 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_901e3f2c-f061-4105-8266-58d4d98b5960) 2025-07-06 19:59:47.098477 | orchestrator | 2025-07-06 19:59:47.099218 | orchestrator | TASK [Add known 
links to the list of available block devices] ****************** 2025-07-06 19:59:47.101466 | orchestrator | Sunday 06 July 2025 19:59:47 +0000 (0:00:00.387) 0:00:03.738 *********** 2025-07-06 19:59:47.709861 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_46febb03-7465-44d2-9b41-dd661ec3aa7d) 2025-07-06 19:59:47.710216 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_46febb03-7465-44d2-9b41-dd661ec3aa7d) 2025-07-06 19:59:47.710686 | orchestrator | 2025-07-06 19:59:47.711667 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:59:47.713465 | orchestrator | Sunday 06 July 2025 19:59:47 +0000 (0:00:00.614) 0:00:04.352 *********** 2025-07-06 19:59:48.292227 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ad2af1d2-0168-4556-9317-4e4f08581fa1) 2025-07-06 19:59:48.292765 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ad2af1d2-0168-4556-9317-4e4f08581fa1) 2025-07-06 19:59:48.293850 | orchestrator | 2025-07-06 19:59:48.294586 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 19:59:48.295297 | orchestrator | Sunday 06 July 2025 19:59:48 +0000 (0:00:00.582) 0:00:04.935 *********** 2025-07-06 19:59:48.980206 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-06 19:59:48.980932 | orchestrator | 2025-07-06 19:59:48.982351 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:48.982540 | orchestrator | Sunday 06 July 2025 19:59:48 +0000 (0:00:00.686) 0:00:05.621 *********** 2025-07-06 19:59:49.399517 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-07-06 19:59:49.399756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-07-06 19:59:49.402217 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-07-06 19:59:49.402596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-07-06 19:59:49.404640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-07-06 19:59:49.406514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-07-06 19:59:49.407681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-07-06 19:59:49.409255 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-07-06 19:59:49.410069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-07-06 19:59:49.410814 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-07-06 19:59:49.411538 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-07-06 19:59:49.412368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-07-06 19:59:49.413007 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-07-06 19:59:49.413788 | orchestrator | 2025-07-06 19:59:49.414554 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 
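The per-disk tasks above resolve the stable /dev/disk/by-id aliases of each disk (the scsi-0QEMU_QEMU_HARDDISK_* and scsi-SQEMU_QEMU_HARDDISK_* names). Those aliases are exposed by Ansible's device facts, so _add-device-links.yml plausibly extends the device list along these lines (illustrative only, not the actual task file):

    - name: Add known links to the list of available block devices
      ansible.builtin.set_fact:
        _available_devices: "{{ _available_devices + ansible_facts['devices'][item]['links']['ids'] }}"
      when: ansible_facts['devices'][item]['links']['ids'] | length > 0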
2025-07-06 19:59:49.415375 | orchestrator | Sunday 06 July 2025 19:59:49 +0000 (0:00:00.419) 0:00:06.041 *********** 2025-07-06 19:59:49.588974 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:49.589082 | orchestrator | 2025-07-06 19:59:49.589215 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:49.590347 | orchestrator | Sunday 06 July 2025 19:59:49 +0000 (0:00:00.189) 0:00:06.230 *********** 2025-07-06 19:59:49.781526 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:49.782069 | orchestrator | 2025-07-06 19:59:49.782792 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:49.783505 | orchestrator | Sunday 06 July 2025 19:59:49 +0000 (0:00:00.193) 0:00:06.424 *********** 2025-07-06 19:59:49.981639 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:49.982481 | orchestrator | 2025-07-06 19:59:49.982974 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:49.984015 | orchestrator | Sunday 06 July 2025 19:59:49 +0000 (0:00:00.200) 0:00:06.624 *********** 2025-07-06 19:59:50.169401 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:50.169632 | orchestrator | 2025-07-06 19:59:50.171015 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:50.171573 | orchestrator | Sunday 06 July 2025 19:59:50 +0000 (0:00:00.186) 0:00:06.811 *********** 2025-07-06 19:59:50.371076 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:50.372145 | orchestrator | 2025-07-06 19:59:50.373137 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:50.373815 | orchestrator | Sunday 06 July 2025 19:59:50 +0000 (0:00:00.202) 0:00:07.014 *********** 2025-07-06 19:59:50.566812 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:50.567268 | orchestrator | 2025-07-06 19:59:50.568912 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:50.569339 | orchestrator | Sunday 06 July 2025 19:59:50 +0000 (0:00:00.195) 0:00:07.210 *********** 2025-07-06 19:59:50.747019 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:50.747157 | orchestrator | 2025-07-06 19:59:50.747887 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:50.748422 | orchestrator | Sunday 06 July 2025 19:59:50 +0000 (0:00:00.180) 0:00:07.390 *********** 2025-07-06 19:59:50.951459 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:50.951812 | orchestrator | 2025-07-06 19:59:50.952527 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:50.953966 | orchestrator | Sunday 06 July 2025 19:59:50 +0000 (0:00:00.203) 0:00:07.593 *********** 2025-07-06 19:59:51.976330 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-07-06 19:59:51.976612 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-07-06 19:59:51.977384 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-07-06 19:59:51.978104 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-07-06 19:59:51.979837 | orchestrator | 2025-07-06 19:59:51.979870 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:51.979885 | orchestrator | Sunday 06 July 2025 19:59:51 +0000 
(0:00:01.025) 0:00:08.618 *********** 2025-07-06 19:59:52.168457 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:52.169036 | orchestrator | 2025-07-06 19:59:52.169490 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:52.170076 | orchestrator | Sunday 06 July 2025 19:59:52 +0000 (0:00:00.193) 0:00:08.811 *********** 2025-07-06 19:59:52.367991 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:52.368878 | orchestrator | 2025-07-06 19:59:52.369514 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:52.370436 | orchestrator | Sunday 06 July 2025 19:59:52 +0000 (0:00:00.199) 0:00:09.011 *********** 2025-07-06 19:59:52.556248 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:52.556452 | orchestrator | 2025-07-06 19:59:52.557177 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 19:59:52.558194 | orchestrator | Sunday 06 July 2025 19:59:52 +0000 (0:00:00.188) 0:00:09.199 *********** 2025-07-06 19:59:52.745145 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:52.745999 | orchestrator | 2025-07-06 19:59:52.746582 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-07-06 19:59:52.747501 | orchestrator | Sunday 06 July 2025 19:59:52 +0000 (0:00:00.188) 0:00:09.388 *********** 2025-07-06 19:59:52.866895 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:52.867531 | orchestrator | 2025-07-06 19:59:52.867881 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-07-06 19:59:52.868399 | orchestrator | Sunday 06 July 2025 19:59:52 +0000 (0:00:00.121) 0:00:09.509 *********** 2025-07-06 19:59:53.070793 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5b3ebdad-89cb-5093-adb4-41e3a34848e3'}}) 2025-07-06 19:59:53.070929 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67620618-3322-5703-9264-076cb24f91fa'}}) 2025-07-06 19:59:53.070955 | orchestrator | 2025-07-06 19:59:53.071181 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-07-06 19:59:53.072205 | orchestrator | Sunday 06 July 2025 19:59:53 +0000 (0:00:00.203) 0:00:09.713 *********** 2025-07-06 19:59:55.060573 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'}) 2025-07-06 19:59:55.060824 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'}) 2025-07-06 19:59:55.061757 | orchestrator | 2025-07-06 19:59:55.062179 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-07-06 19:59:55.062767 | orchestrator | Sunday 06 July 2025 19:59:55 +0000 (0:00:01.990) 0:00:11.703 *********** 2025-07-06 19:59:55.231589 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 19:59:55.232239 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 19:59:55.233602 | orchestrator | skipping: 
[testbed-node-3] 2025-07-06 19:59:55.235552 | orchestrator | 2025-07-06 19:59:55.236199 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-07-06 19:59:55.237183 | orchestrator | Sunday 06 July 2025 19:59:55 +0000 (0:00:00.170) 0:00:11.874 *********** 2025-07-06 19:59:56.649174 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'}) 2025-07-06 19:59:56.649917 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'}) 2025-07-06 19:59:56.652063 | orchestrator | 2025-07-06 19:59:56.653018 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-07-06 19:59:56.654160 | orchestrator | Sunday 06 July 2025 19:59:56 +0000 (0:00:01.416) 0:00:13.290 *********** 2025-07-06 19:59:56.785923 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 19:59:56.786596 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 19:59:56.787933 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:56.789709 | orchestrator | 2025-07-06 19:59:56.791879 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-07-06 19:59:56.791915 | orchestrator | Sunday 06 July 2025 19:59:56 +0000 (0:00:00.138) 0:00:13.428 *********** 2025-07-06 19:59:56.928255 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:56.928778 | orchestrator | 2025-07-06 19:59:56.929399 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-07-06 19:59:56.929985 | orchestrator | Sunday 06 July 2025 19:59:56 +0000 (0:00:00.143) 0:00:13.572 *********** 2025-07-06 19:59:57.262276 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 19:59:57.262555 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 19:59:57.263327 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:57.264155 | orchestrator | 2025-07-06 19:59:57.264870 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-07-06 19:59:57.265637 | orchestrator | Sunday 06 July 2025 19:59:57 +0000 (0:00:00.331) 0:00:13.903 *********** 2025-07-06 19:59:57.397416 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:57.397846 | orchestrator | 2025-07-06 19:59:57.399535 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-07-06 19:59:57.400726 | orchestrator | Sunday 06 July 2025 19:59:57 +0000 (0:00:00.136) 0:00:14.040 *********** 2025-07-06 19:59:57.544316 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 19:59:57.544927 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 
'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 19:59:57.545985 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:57.547104 | orchestrator | 2025-07-06 19:59:57.547807 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-07-06 19:59:57.548925 | orchestrator | Sunday 06 July 2025 19:59:57 +0000 (0:00:00.146) 0:00:14.187 *********** 2025-07-06 19:59:57.684822 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:57.685636 | orchestrator | 2025-07-06 19:59:57.686493 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-07-06 19:59:57.687649 | orchestrator | Sunday 06 July 2025 19:59:57 +0000 (0:00:00.140) 0:00:14.328 *********** 2025-07-06 19:59:57.838782 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 19:59:57.840414 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 19:59:57.842183 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:57.843658 | orchestrator | 2025-07-06 19:59:57.843870 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-07-06 19:59:57.844316 | orchestrator | Sunday 06 July 2025 19:59:57 +0000 (0:00:00.153) 0:00:14.481 *********** 2025-07-06 19:59:57.978849 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:59:57.979442 | orchestrator | 2025-07-06 19:59:57.981453 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-07-06 19:59:57.982456 | orchestrator | Sunday 06 July 2025 19:59:57 +0000 (0:00:00.137) 0:00:14.619 *********** 2025-07-06 19:59:58.130013 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 19:59:58.130182 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 19:59:58.130739 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:58.131950 | orchestrator | 2025-07-06 19:59:58.132096 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-07-06 19:59:58.133009 | orchestrator | Sunday 06 July 2025 19:59:58 +0000 (0:00:00.151) 0:00:14.770 *********** 2025-07-06 19:59:58.274785 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 19:59:58.274959 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 19:59:58.276236 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:58.277375 | orchestrator | 2025-07-06 19:59:58.277782 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-07-06 19:59:58.278409 | orchestrator | Sunday 06 July 2025 19:59:58 +0000 (0:00:00.147) 0:00:14.918 *********** 2025-07-06 19:59:58.428355 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 
'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 19:59:58.428574 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 19:59:58.428596 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:58.429904 | orchestrator | 2025-07-06 19:59:58.430226 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-07-06 19:59:58.430351 | orchestrator | Sunday 06 July 2025 19:59:58 +0000 (0:00:00.153) 0:00:15.071 *********** 2025-07-06 19:59:58.563641 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:58.564476 | orchestrator | 2025-07-06 19:59:58.565234 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-07-06 19:59:58.565888 | orchestrator | Sunday 06 July 2025 19:59:58 +0000 (0:00:00.135) 0:00:15.206 *********** 2025-07-06 19:59:58.706992 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:58.707395 | orchestrator | 2025-07-06 19:59:58.707949 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-07-06 19:59:58.708428 | orchestrator | Sunday 06 July 2025 19:59:58 +0000 (0:00:00.143) 0:00:15.349 *********** 2025-07-06 19:59:58.845867 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:59:58.846089 | orchestrator | 2025-07-06 19:59:58.846761 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-07-06 19:59:58.846972 | orchestrator | Sunday 06 July 2025 19:59:58 +0000 (0:00:00.138) 0:00:15.488 *********** 2025-07-06 19:59:59.170573 | orchestrator | ok: [testbed-node-3] => { 2025-07-06 19:59:59.171524 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-07-06 19:59:59.172810 | orchestrator | } 2025-07-06 19:59:59.173702 | orchestrator | 2025-07-06 19:59:59.174230 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-07-06 19:59:59.174784 | orchestrator | Sunday 06 July 2025 19:59:59 +0000 (0:00:00.323) 0:00:15.811 *********** 2025-07-06 19:59:59.318095 | orchestrator | ok: [testbed-node-3] => { 2025-07-06 19:59:59.318469 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-07-06 19:59:59.319626 | orchestrator | } 2025-07-06 19:59:59.320993 | orchestrator | 2025-07-06 19:59:59.322283 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-07-06 19:59:59.323730 | orchestrator | Sunday 06 July 2025 19:59:59 +0000 (0:00:00.148) 0:00:15.960 *********** 2025-07-06 19:59:59.442821 | orchestrator | ok: [testbed-node-3] => { 2025-07-06 19:59:59.443747 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-07-06 19:59:59.445593 | orchestrator | } 2025-07-06 19:59:59.446215 | orchestrator | 2025-07-06 19:59:59.447955 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-07-06 19:59:59.450403 | orchestrator | Sunday 06 July 2025 19:59:59 +0000 (0:00:00.124) 0:00:16.085 *********** 2025-07-06 20:00:00.083591 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:00:00.083828 | orchestrator | 2025-07-06 20:00:00.084775 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-07-06 20:00:00.086258 | orchestrator | Sunday 06 July 2025 20:00:00 +0000 (0:00:00.640) 0:00:16.725 *********** 2025-07-06 20:00:00.632636 | orchestrator | ok: [testbed-node-3] 2025-07-06 
20:00:00.633413 | orchestrator | 2025-07-06 20:00:00.634378 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-07-06 20:00:00.635136 | orchestrator | Sunday 06 July 2025 20:00:00 +0000 (0:00:00.549) 0:00:17.275 *********** 2025-07-06 20:00:01.151985 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:00:01.152083 | orchestrator | 2025-07-06 20:00:01.152755 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-07-06 20:00:01.152851 | orchestrator | Sunday 06 July 2025 20:00:01 +0000 (0:00:00.518) 0:00:17.793 *********** 2025-07-06 20:00:01.296489 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:00:01.296812 | orchestrator | 2025-07-06 20:00:01.297519 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-07-06 20:00:01.297942 | orchestrator | Sunday 06 July 2025 20:00:01 +0000 (0:00:00.146) 0:00:17.940 *********** 2025-07-06 20:00:01.418272 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:01.418855 | orchestrator | 2025-07-06 20:00:01.420268 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-07-06 20:00:01.421261 | orchestrator | Sunday 06 July 2025 20:00:01 +0000 (0:00:00.121) 0:00:18.061 *********** 2025-07-06 20:00:01.538202 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:01.538530 | orchestrator | 2025-07-06 20:00:01.539248 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-07-06 20:00:01.540029 | orchestrator | Sunday 06 July 2025 20:00:01 +0000 (0:00:00.119) 0:00:18.181 *********** 2025-07-06 20:00:01.669397 | orchestrator | ok: [testbed-node-3] => { 2025-07-06 20:00:01.670112 | orchestrator |  "vgs_report": { 2025-07-06 20:00:01.670874 | orchestrator |  "vg": [] 2025-07-06 20:00:01.671758 | orchestrator |  } 2025-07-06 20:00:01.672270 | orchestrator | } 2025-07-06 20:00:01.673193 | orchestrator | 2025-07-06 20:00:01.673688 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-07-06 20:00:01.674069 | orchestrator | Sunday 06 July 2025 20:00:01 +0000 (0:00:00.131) 0:00:18.312 *********** 2025-07-06 20:00:01.803974 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:01.804271 | orchestrator | 2025-07-06 20:00:01.805245 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-07-06 20:00:01.806161 | orchestrator | Sunday 06 July 2025 20:00:01 +0000 (0:00:00.134) 0:00:18.446 *********** 2025-07-06 20:00:01.925024 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:01.925845 | orchestrator | 2025-07-06 20:00:01.926731 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-07-06 20:00:01.927466 | orchestrator | Sunday 06 July 2025 20:00:01 +0000 (0:00:00.121) 0:00:18.568 *********** 2025-07-06 20:00:02.268574 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:02.269023 | orchestrator | 2025-07-06 20:00:02.270168 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-07-06 20:00:02.271046 | orchestrator | Sunday 06 July 2025 20:00:02 +0000 (0:00:00.342) 0:00:18.910 *********** 2025-07-06 20:00:02.403718 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:02.404216 | orchestrator | 2025-07-06 20:00:02.405480 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] 
*********************** 2025-07-06 20:00:02.405928 | orchestrator | Sunday 06 July 2025 20:00:02 +0000 (0:00:00.133) 0:00:19.044 *********** 2025-07-06 20:00:02.539837 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:02.540339 | orchestrator | 2025-07-06 20:00:02.541132 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-07-06 20:00:02.542692 | orchestrator | Sunday 06 July 2025 20:00:02 +0000 (0:00:00.138) 0:00:19.183 *********** 2025-07-06 20:00:02.675880 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:02.676402 | orchestrator | 2025-07-06 20:00:02.677134 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-07-06 20:00:02.678067 | orchestrator | Sunday 06 July 2025 20:00:02 +0000 (0:00:00.135) 0:00:19.319 *********** 2025-07-06 20:00:02.813271 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:02.813480 | orchestrator | 2025-07-06 20:00:02.813869 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-07-06 20:00:02.814301 | orchestrator | Sunday 06 July 2025 20:00:02 +0000 (0:00:00.135) 0:00:19.454 *********** 2025-07-06 20:00:02.960944 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:02.961280 | orchestrator | 2025-07-06 20:00:02.962326 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-07-06 20:00:02.962821 | orchestrator | Sunday 06 July 2025 20:00:02 +0000 (0:00:00.149) 0:00:19.604 *********** 2025-07-06 20:00:03.097881 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:03.098442 | orchestrator | 2025-07-06 20:00:03.099179 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-07-06 20:00:03.099873 | orchestrator | Sunday 06 July 2025 20:00:03 +0000 (0:00:00.136) 0:00:19.740 *********** 2025-07-06 20:00:03.249221 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:03.249444 | orchestrator | 2025-07-06 20:00:03.249907 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-07-06 20:00:03.250289 | orchestrator | Sunday 06 July 2025 20:00:03 +0000 (0:00:00.150) 0:00:19.891 *********** 2025-07-06 20:00:03.397728 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:03.397833 | orchestrator | 2025-07-06 20:00:03.397929 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-07-06 20:00:03.398216 | orchestrator | Sunday 06 July 2025 20:00:03 +0000 (0:00:00.149) 0:00:20.041 *********** 2025-07-06 20:00:03.532711 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:03.534408 | orchestrator | 2025-07-06 20:00:03.536691 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-07-06 20:00:03.537888 | orchestrator | Sunday 06 July 2025 20:00:03 +0000 (0:00:00.131) 0:00:20.173 *********** 2025-07-06 20:00:03.662903 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:03.663123 | orchestrator | 2025-07-06 20:00:03.664347 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-07-06 20:00:03.665151 | orchestrator | Sunday 06 July 2025 20:00:03 +0000 (0:00:00.132) 0:00:20.305 *********** 2025-07-06 20:00:03.807115 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:03.807512 | orchestrator | 2025-07-06 20:00:03.808474 | orchestrator | TASK [Create DB LVs for ceph_db_devices] 
*************************************** 2025-07-06 20:00:03.810105 | orchestrator | Sunday 06 July 2025 20:00:03 +0000 (0:00:00.143) 0:00:20.449 *********** 2025-07-06 20:00:03.957755 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 20:00:03.957884 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 20:00:03.957996 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:03.958353 | orchestrator | 2025-07-06 20:00:03.958759 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-07-06 20:00:03.959328 | orchestrator | Sunday 06 July 2025 20:00:03 +0000 (0:00:00.151) 0:00:20.601 *********** 2025-07-06 20:00:04.322992 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 20:00:04.323096 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 20:00:04.323902 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:04.325237 | orchestrator | 2025-07-06 20:00:04.325840 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-07-06 20:00:04.326400 | orchestrator | Sunday 06 July 2025 20:00:04 +0000 (0:00:00.362) 0:00:20.963 *********** 2025-07-06 20:00:04.472448 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 20:00:04.474449 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 20:00:04.474884 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:04.476048 | orchestrator | 2025-07-06 20:00:04.477015 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-07-06 20:00:04.477543 | orchestrator | Sunday 06 July 2025 20:00:04 +0000 (0:00:00.151) 0:00:21.115 *********** 2025-07-06 20:00:04.632951 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 20:00:04.633234 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 20:00:04.633509 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:04.635374 | orchestrator | 2025-07-06 20:00:04.636968 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-07-06 20:00:04.637727 | orchestrator | Sunday 06 July 2025 20:00:04 +0000 (0:00:00.160) 0:00:21.275 *********** 2025-07-06 20:00:04.789337 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 20:00:04.790533 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  
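In this block-only layout all DB/WAL tasks are skipped; the only changes on testbed-node-3 so far are "Create block VGs" and "Create block LVs". A minimal sketch of those two steps with the community.general LVM modules; the module choice and the _block_vgs_pvs mapping (corresponding to the earlier "Create dict of block VGs -> PVs from ceph_osd_devices" task) are assumptions, while the VG/LV names match the log:

    - name: Create block VGs
      community.general.lvg:
        vg: "{{ item.data_vg }}"                   # e.g. ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3
        pvs: "{{ _block_vgs_pvs[item.data_vg] }}"  # e.g. /dev/sdb, from the VG -> PV dict
      loop: "{{ lvm_volumes }}"

    - name: Create block LVs
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"                      # e.g. osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3
        size: 100%FREE                             # assumption: one LV spanning the whole VG
      loop: "{{ lvm_volumes }}"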
2025-07-06 20:00:04.792457 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:04.793896 | orchestrator | 2025-07-06 20:00:04.794344 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-07-06 20:00:04.794861 | orchestrator | Sunday 06 July 2025 20:00:04 +0000 (0:00:00.155) 0:00:21.431 *********** 2025-07-06 20:00:04.940021 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 20:00:04.940125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 20:00:04.941271 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:04.943210 | orchestrator | 2025-07-06 20:00:04.944314 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-07-06 20:00:04.945353 | orchestrator | Sunday 06 July 2025 20:00:04 +0000 (0:00:00.150) 0:00:21.581 *********** 2025-07-06 20:00:05.091908 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 20:00:05.093401 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 20:00:05.093785 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:05.094910 | orchestrator | 2025-07-06 20:00:05.096208 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-07-06 20:00:05.096752 | orchestrator | Sunday 06 July 2025 20:00:05 +0000 (0:00:00.153) 0:00:21.735 *********** 2025-07-06 20:00:05.243351 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 20:00:05.243841 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 20:00:05.244697 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:05.246172 | orchestrator | 2025-07-06 20:00:05.247177 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-07-06 20:00:05.248193 | orchestrator | Sunday 06 July 2025 20:00:05 +0000 (0:00:00.149) 0:00:21.885 *********** 2025-07-06 20:00:05.755354 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:00:05.756752 | orchestrator | 2025-07-06 20:00:05.756800 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-07-06 20:00:05.757490 | orchestrator | Sunday 06 July 2025 20:00:05 +0000 (0:00:00.513) 0:00:22.398 *********** 2025-07-06 20:00:06.269197 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:00:06.269309 | orchestrator | 2025-07-06 20:00:06.271505 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-07-06 20:00:06.271882 | orchestrator | Sunday 06 July 2025 20:00:06 +0000 (0:00:00.512) 0:00:22.910 *********** 2025-07-06 20:00:06.416722 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:00:06.416942 | orchestrator | 2025-07-06 20:00:06.417712 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 
2025-07-06 20:00:06.418560 | orchestrator | Sunday 06 July 2025 20:00:06 +0000 (0:00:00.148) 0:00:23.059 *********** 2025-07-06 20:00:06.582928 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'vg_name': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'}) 2025-07-06 20:00:06.583106 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'vg_name': 'ceph-67620618-3322-5703-9264-076cb24f91fa'}) 2025-07-06 20:00:06.584113 | orchestrator | 2025-07-06 20:00:06.585289 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-07-06 20:00:06.586335 | orchestrator | Sunday 06 July 2025 20:00:06 +0000 (0:00:00.166) 0:00:23.225 *********** 2025-07-06 20:00:06.728241 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 20:00:06.728426 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 20:00:06.729179 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:06.729681 | orchestrator | 2025-07-06 20:00:06.730373 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-07-06 20:00:06.730901 | orchestrator | Sunday 06 July 2025 20:00:06 +0000 (0:00:00.145) 0:00:23.371 *********** 2025-07-06 20:00:07.059241 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 20:00:07.060065 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 20:00:07.060584 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:07.066773 | orchestrator | 2025-07-06 20:00:07.066849 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-07-06 20:00:07.066866 | orchestrator | Sunday 06 July 2025 20:00:07 +0000 (0:00:00.331) 0:00:23.702 *********** 2025-07-06 20:00:07.215095 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'})  2025-07-06 20:00:07.215738 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'})  2025-07-06 20:00:07.218381 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:00:07.218411 | orchestrator | 2025-07-06 20:00:07.219201 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-07-06 20:00:07.220152 | orchestrator | Sunday 06 July 2025 20:00:07 +0000 (0:00:00.154) 0:00:23.857 *********** 2025-07-06 20:00:07.497042 | orchestrator | ok: [testbed-node-3] => { 2025-07-06 20:00:07.497438 | orchestrator |  "lvm_report": { 2025-07-06 20:00:07.498250 | orchestrator |  "lv": [ 2025-07-06 20:00:07.498952 | orchestrator |  { 2025-07-06 20:00:07.499669 | orchestrator |  "lv_name": "osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3", 2025-07-06 20:00:07.501646 | orchestrator |  "vg_name": "ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3" 2025-07-06 20:00:07.502094 | orchestrator |  }, 2025-07-06 20:00:07.503110 
| orchestrator |  { 2025-07-06 20:00:07.503468 | orchestrator |  "lv_name": "osd-block-67620618-3322-5703-9264-076cb24f91fa", 2025-07-06 20:00:07.503968 | orchestrator |  "vg_name": "ceph-67620618-3322-5703-9264-076cb24f91fa" 2025-07-06 20:00:07.504500 | orchestrator |  } 2025-07-06 20:00:07.505347 | orchestrator |  ], 2025-07-06 20:00:07.506151 | orchestrator |  "pv": [ 2025-07-06 20:00:07.506359 | orchestrator |  { 2025-07-06 20:00:07.506699 | orchestrator |  "pv_name": "/dev/sdb", 2025-07-06 20:00:07.507036 | orchestrator |  "vg_name": "ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3" 2025-07-06 20:00:07.507329 | orchestrator |  }, 2025-07-06 20:00:07.508617 | orchestrator |  { 2025-07-06 20:00:07.508777 | orchestrator |  "pv_name": "/dev/sdc", 2025-07-06 20:00:07.509603 | orchestrator |  "vg_name": "ceph-67620618-3322-5703-9264-076cb24f91fa" 2025-07-06 20:00:07.509829 | orchestrator |  } 2025-07-06 20:00:07.510498 | orchestrator |  ] 2025-07-06 20:00:07.510853 | orchestrator |  } 2025-07-06 20:00:07.511382 | orchestrator | } 2025-07-06 20:00:07.511840 | orchestrator | 2025-07-06 20:00:07.512310 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-07-06 20:00:07.512693 | orchestrator | 2025-07-06 20:00:07.513272 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-06 20:00:07.513557 | orchestrator | Sunday 06 July 2025 20:00:07 +0000 (0:00:00.282) 0:00:24.140 *********** 2025-07-06 20:00:07.728763 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-07-06 20:00:07.729303 | orchestrator | 2025-07-06 20:00:07.730000 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-06 20:00:07.730789 | orchestrator | Sunday 06 July 2025 20:00:07 +0000 (0:00:00.229) 0:00:24.370 *********** 2025-07-06 20:00:07.952153 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:00:07.952620 | orchestrator | 2025-07-06 20:00:07.954117 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:07.954934 | orchestrator | Sunday 06 July 2025 20:00:07 +0000 (0:00:00.225) 0:00:24.595 *********** 2025-07-06 20:00:08.348521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-07-06 20:00:08.348822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-07-06 20:00:08.349621 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-07-06 20:00:08.350370 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-07-06 20:00:08.352562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-07-06 20:00:08.353066 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-07-06 20:00:08.353507 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-07-06 20:00:08.354095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-07-06 20:00:08.354609 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-07-06 20:00:08.355366 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-07-06 20:00:08.356005 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-07-06 20:00:08.356591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-07-06 20:00:08.359315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-07-06 20:00:08.359893 | orchestrator | 2025-07-06 20:00:08.360150 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:08.360466 | orchestrator | Sunday 06 July 2025 20:00:08 +0000 (0:00:00.393) 0:00:24.988 *********** 2025-07-06 20:00:08.543871 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:08.544418 | orchestrator | 2025-07-06 20:00:08.545717 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:08.546652 | orchestrator | Sunday 06 July 2025 20:00:08 +0000 (0:00:00.198) 0:00:25.187 *********** 2025-07-06 20:00:08.733762 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:08.734074 | orchestrator | 2025-07-06 20:00:08.734859 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:08.735883 | orchestrator | Sunday 06 July 2025 20:00:08 +0000 (0:00:00.186) 0:00:25.374 *********** 2025-07-06 20:00:08.910729 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:08.911070 | orchestrator | 2025-07-06 20:00:08.911731 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:08.913811 | orchestrator | Sunday 06 July 2025 20:00:08 +0000 (0:00:00.178) 0:00:25.552 *********** 2025-07-06 20:00:09.473870 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:09.474592 | orchestrator | 2025-07-06 20:00:09.475905 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:09.476407 | orchestrator | Sunday 06 July 2025 20:00:09 +0000 (0:00:00.563) 0:00:26.115 *********** 2025-07-06 20:00:09.664422 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:09.665163 | orchestrator | 2025-07-06 20:00:09.665483 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:09.666216 | orchestrator | Sunday 06 July 2025 20:00:09 +0000 (0:00:00.191) 0:00:26.307 *********** 2025-07-06 20:00:09.857774 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:09.857977 | orchestrator | 2025-07-06 20:00:09.858809 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:09.859573 | orchestrator | Sunday 06 July 2025 20:00:09 +0000 (0:00:00.193) 0:00:26.500 *********** 2025-07-06 20:00:10.064732 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:10.065257 | orchestrator | 2025-07-06 20:00:10.065977 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:10.066909 | orchestrator | Sunday 06 July 2025 20:00:10 +0000 (0:00:00.206) 0:00:26.707 *********** 2025-07-06 20:00:10.261924 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:10.262718 | orchestrator | 2025-07-06 20:00:10.263911 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:10.264148 | orchestrator | Sunday 06 July 2025 20:00:10 +0000 (0:00:00.197) 0:00:26.905 *********** 2025-07-06 20:00:10.660376 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e) 2025-07-06 20:00:10.660574 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e) 2025-07-06 20:00:10.661843 | orchestrator | 2025-07-06 20:00:10.662705 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:10.663749 | orchestrator | Sunday 06 July 2025 20:00:10 +0000 (0:00:00.398) 0:00:27.303 *********** 2025-07-06 20:00:11.077721 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_95e38168-1e77-4099-bfde-ad7249670c4c) 2025-07-06 20:00:11.077833 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_95e38168-1e77-4099-bfde-ad7249670c4c) 2025-07-06 20:00:11.077907 | orchestrator | 2025-07-06 20:00:11.079141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:11.079855 | orchestrator | Sunday 06 July 2025 20:00:11 +0000 (0:00:00.412) 0:00:27.715 *********** 2025-07-06 20:00:11.482496 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_951512cc-5411-4e34-a1bc-779e76dbc3d2) 2025-07-06 20:00:11.485383 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_951512cc-5411-4e34-a1bc-779e76dbc3d2) 2025-07-06 20:00:11.486579 | orchestrator | 2025-07-06 20:00:11.487755 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:11.488598 | orchestrator | Sunday 06 July 2025 20:00:11 +0000 (0:00:00.407) 0:00:28.123 *********** 2025-07-06 20:00:11.900967 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6eb6290b-216e-4753-9f37-507fd8d1c155) 2025-07-06 20:00:11.901071 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6eb6290b-216e-4753-9f37-507fd8d1c155) 2025-07-06 20:00:11.901886 | orchestrator | 2025-07-06 20:00:11.902066 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:11.902952 | orchestrator | Sunday 06 July 2025 20:00:11 +0000 (0:00:00.420) 0:00:28.544 *********** 2025-07-06 20:00:12.226762 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-06 20:00:12.226863 | orchestrator | 2025-07-06 20:00:12.227380 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:12.228067 | orchestrator | Sunday 06 July 2025 20:00:12 +0000 (0:00:00.325) 0:00:28.870 *********** 2025-07-06 20:00:12.812343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-07-06 20:00:12.815045 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-07-06 20:00:12.815183 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-07-06 20:00:12.816344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-07-06 20:00:12.816828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-07-06 20:00:12.817963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-07-06 20:00:12.818760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-07-06 20:00:12.820169 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-07-06 20:00:12.820879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-07-06 20:00:12.821509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-07-06 20:00:12.822350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-07-06 20:00:12.822913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-07-06 20:00:12.823787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-07-06 20:00:12.824247 | orchestrator | 2025-07-06 20:00:12.824707 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:12.825820 | orchestrator | Sunday 06 July 2025 20:00:12 +0000 (0:00:00.583) 0:00:29.453 *********** 2025-07-06 20:00:13.004866 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:13.004962 | orchestrator | 2025-07-06 20:00:13.007630 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:13.007930 | orchestrator | Sunday 06 July 2025 20:00:12 +0000 (0:00:00.194) 0:00:29.647 *********** 2025-07-06 20:00:13.196695 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:13.196807 | orchestrator | 2025-07-06 20:00:13.196824 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:13.196934 | orchestrator | Sunday 06 July 2025 20:00:13 +0000 (0:00:00.192) 0:00:29.840 *********** 2025-07-06 20:00:13.393557 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:13.393701 | orchestrator | 2025-07-06 20:00:13.395234 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:13.396252 | orchestrator | Sunday 06 July 2025 20:00:13 +0000 (0:00:00.195) 0:00:30.035 *********** 2025-07-06 20:00:13.592249 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:13.592476 | orchestrator | 2025-07-06 20:00:13.593374 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:13.594126 | orchestrator | Sunday 06 July 2025 20:00:13 +0000 (0:00:00.199) 0:00:30.235 *********** 2025-07-06 20:00:13.812656 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:13.812776 | orchestrator | 2025-07-06 20:00:13.812841 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:13.813294 | orchestrator | Sunday 06 July 2025 20:00:13 +0000 (0:00:00.219) 0:00:30.454 *********** 2025-07-06 20:00:14.008556 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:14.008678 | orchestrator | 2025-07-06 20:00:14.009056 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:14.009534 | orchestrator | Sunday 06 July 2025 20:00:14 +0000 (0:00:00.197) 0:00:30.652 *********** 2025-07-06 20:00:14.204026 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:14.204164 | orchestrator | 2025-07-06 20:00:14.204190 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:14.204211 | orchestrator | Sunday 06 July 2025 20:00:14 +0000 (0:00:00.192) 0:00:30.845 *********** 2025-07-06 20:00:14.406202 | orchestrator | 
skipping: [testbed-node-4] 2025-07-06 20:00:14.406380 | orchestrator | 2025-07-06 20:00:14.408782 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:14.409894 | orchestrator | Sunday 06 July 2025 20:00:14 +0000 (0:00:00.203) 0:00:31.048 *********** 2025-07-06 20:00:15.220120 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-07-06 20:00:15.220239 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-07-06 20:00:15.220391 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-07-06 20:00:15.221157 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-07-06 20:00:15.222099 | orchestrator | 2025-07-06 20:00:15.223816 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:15.223856 | orchestrator | Sunday 06 July 2025 20:00:15 +0000 (0:00:00.813) 0:00:31.862 *********** 2025-07-06 20:00:15.409376 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:15.409583 | orchestrator | 2025-07-06 20:00:15.410837 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:15.411887 | orchestrator | Sunday 06 July 2025 20:00:15 +0000 (0:00:00.188) 0:00:32.050 *********** 2025-07-06 20:00:15.600133 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:15.600262 | orchestrator | 2025-07-06 20:00:15.601288 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:15.602066 | orchestrator | Sunday 06 July 2025 20:00:15 +0000 (0:00:00.191) 0:00:32.242 *********** 2025-07-06 20:00:16.247253 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:16.247966 | orchestrator | 2025-07-06 20:00:16.250365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:16.250496 | orchestrator | Sunday 06 July 2025 20:00:16 +0000 (0:00:00.646) 0:00:32.889 *********** 2025-07-06 20:00:16.441190 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:16.441436 | orchestrator | 2025-07-06 20:00:16.442006 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-07-06 20:00:16.442760 | orchestrator | Sunday 06 July 2025 20:00:16 +0000 (0:00:00.193) 0:00:33.082 *********** 2025-07-06 20:00:16.570281 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:16.570394 | orchestrator | 2025-07-06 20:00:16.570447 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-07-06 20:00:16.570949 | orchestrator | Sunday 06 July 2025 20:00:16 +0000 (0:00:00.131) 0:00:33.214 *********** 2025-07-06 20:00:16.776363 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6b2ac7c1-b26c-557b-8077-56c3cb59db23'}}) 2025-07-06 20:00:16.776782 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'}}) 2025-07-06 20:00:16.777881 | orchestrator | 2025-07-06 20:00:16.778492 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-07-06 20:00:16.779114 | orchestrator | Sunday 06 July 2025 20:00:16 +0000 (0:00:00.205) 0:00:33.420 *********** 2025-07-06 20:00:18.500127 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'}) 2025-07-06 20:00:18.500559 | orchestrator | changed: 
[testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'}) 2025-07-06 20:00:18.502115 | orchestrator | 2025-07-06 20:00:18.503005 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-07-06 20:00:18.504881 | orchestrator | Sunday 06 July 2025 20:00:18 +0000 (0:00:01.721) 0:00:35.142 *********** 2025-07-06 20:00:18.649500 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:18.650157 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:18.651216 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:18.653283 | orchestrator | 2025-07-06 20:00:18.653312 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-07-06 20:00:18.654154 | orchestrator | Sunday 06 July 2025 20:00:18 +0000 (0:00:00.150) 0:00:35.292 *********** 2025-07-06 20:00:19.899642 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'}) 2025-07-06 20:00:19.900470 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'}) 2025-07-06 20:00:19.901713 | orchestrator | 2025-07-06 20:00:19.902157 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-07-06 20:00:19.903648 | orchestrator | Sunday 06 July 2025 20:00:19 +0000 (0:00:01.248) 0:00:36.541 *********** 2025-07-06 20:00:20.047364 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:20.047464 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:20.048471 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:20.050141 | orchestrator | 2025-07-06 20:00:20.051030 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-07-06 20:00:20.052159 | orchestrator | Sunday 06 July 2025 20:00:20 +0000 (0:00:00.148) 0:00:36.689 *********** 2025-07-06 20:00:20.171196 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:20.171792 | orchestrator | 2025-07-06 20:00:20.173394 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-07-06 20:00:20.174447 | orchestrator | Sunday 06 July 2025 20:00:20 +0000 (0:00:00.125) 0:00:36.815 *********** 2025-07-06 20:00:20.320420 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:20.322078 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:20.322683 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:20.323311 | orchestrator | 2025-07-06 20:00:20.324291 | orchestrator | TASK [Create WAL VGs] 
********************************************************** 2025-07-06 20:00:20.324747 | orchestrator | Sunday 06 July 2025 20:00:20 +0000 (0:00:00.145) 0:00:36.960 *********** 2025-07-06 20:00:20.463799 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:20.464022 | orchestrator | 2025-07-06 20:00:20.465494 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-07-06 20:00:20.466093 | orchestrator | Sunday 06 July 2025 20:00:20 +0000 (0:00:00.147) 0:00:37.107 *********** 2025-07-06 20:00:20.609746 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:20.610979 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:20.611763 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:20.612569 | orchestrator | 2025-07-06 20:00:20.613917 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-07-06 20:00:20.614485 | orchestrator | Sunday 06 July 2025 20:00:20 +0000 (0:00:00.144) 0:00:37.252 *********** 2025-07-06 20:00:20.940767 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:20.940977 | orchestrator | 2025-07-06 20:00:20.941936 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-07-06 20:00:20.942614 | orchestrator | Sunday 06 July 2025 20:00:20 +0000 (0:00:00.332) 0:00:37.584 *********** 2025-07-06 20:00:21.083431 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:21.083824 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:21.084636 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:21.085425 | orchestrator | 2025-07-06 20:00:21.087933 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-07-06 20:00:21.087991 | orchestrator | Sunday 06 July 2025 20:00:21 +0000 (0:00:00.142) 0:00:37.726 *********** 2025-07-06 20:00:21.228691 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:00:21.228793 | orchestrator | 2025-07-06 20:00:21.229655 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-07-06 20:00:21.230933 | orchestrator | Sunday 06 July 2025 20:00:21 +0000 (0:00:00.143) 0:00:37.870 *********** 2025-07-06 20:00:21.378167 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:21.378285 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:21.379151 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:21.380430 | orchestrator | 2025-07-06 20:00:21.382224 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-07-06 20:00:21.382391 | orchestrator | Sunday 06 July 2025 20:00:21 +0000 (0:00:00.150) 0:00:38.021 *********** 2025-07-06 20:00:21.526764 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:21.527126 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:21.527824 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:21.528499 | orchestrator | 2025-07-06 20:00:21.529435 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-07-06 20:00:21.531095 | orchestrator | Sunday 06 July 2025 20:00:21 +0000 (0:00:00.148) 0:00:38.170 *********** 2025-07-06 20:00:21.685889 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:21.686211 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:21.686612 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:21.687085 | orchestrator | 2025-07-06 20:00:21.687764 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-07-06 20:00:21.688164 | orchestrator | Sunday 06 July 2025 20:00:21 +0000 (0:00:00.157) 0:00:38.327 *********** 2025-07-06 20:00:21.806488 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:21.806952 | orchestrator | 2025-07-06 20:00:21.807535 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-07-06 20:00:21.808488 | orchestrator | Sunday 06 July 2025 20:00:21 +0000 (0:00:00.122) 0:00:38.450 *********** 2025-07-06 20:00:21.929253 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:21.930104 | orchestrator | 2025-07-06 20:00:21.930867 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-07-06 20:00:21.931718 | orchestrator | Sunday 06 July 2025 20:00:21 +0000 (0:00:00.122) 0:00:38.572 *********** 2025-07-06 20:00:22.079003 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:22.079323 | orchestrator | 2025-07-06 20:00:22.081553 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-07-06 20:00:22.082939 | orchestrator | Sunday 06 July 2025 20:00:22 +0000 (0:00:00.147) 0:00:38.720 *********** 2025-07-06 20:00:22.218334 | orchestrator | ok: [testbed-node-4] => { 2025-07-06 20:00:22.219966 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-07-06 20:00:22.221255 | orchestrator | } 2025-07-06 20:00:22.222916 | orchestrator | 2025-07-06 20:00:22.222969 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-07-06 20:00:22.223572 | orchestrator | Sunday 06 July 2025 20:00:22 +0000 (0:00:00.141) 0:00:38.861 *********** 2025-07-06 20:00:22.362815 | orchestrator | ok: [testbed-node-4] => { 2025-07-06 20:00:22.363355 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-07-06 20:00:22.364032 | orchestrator | } 2025-07-06 20:00:22.365280 | orchestrator | 2025-07-06 20:00:22.366226 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-07-06 20:00:22.366916 | orchestrator | Sunday 06 July 2025 20:00:22 +0000 (0:00:00.144) 0:00:39.005 *********** 2025-07-06 20:00:22.507028 | orchestrator | ok: [testbed-node-4] => { 
2025-07-06 20:00:22.508123 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-07-06 20:00:22.508391 | orchestrator | } 2025-07-06 20:00:22.509716 | orchestrator | 2025-07-06 20:00:22.511189 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-07-06 20:00:22.512324 | orchestrator | Sunday 06 July 2025 20:00:22 +0000 (0:00:00.143) 0:00:39.149 *********** 2025-07-06 20:00:23.213083 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:00:23.213265 | orchestrator | 2025-07-06 20:00:23.214377 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-07-06 20:00:23.215104 | orchestrator | Sunday 06 July 2025 20:00:23 +0000 (0:00:00.705) 0:00:39.854 *********** 2025-07-06 20:00:23.720792 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:00:23.722195 | orchestrator | 2025-07-06 20:00:23.723308 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-07-06 20:00:23.723976 | orchestrator | Sunday 06 July 2025 20:00:23 +0000 (0:00:00.507) 0:00:40.362 *********** 2025-07-06 20:00:24.228325 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:00:24.228792 | orchestrator | 2025-07-06 20:00:24.230186 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-07-06 20:00:24.231237 | orchestrator | Sunday 06 July 2025 20:00:24 +0000 (0:00:00.508) 0:00:40.871 *********** 2025-07-06 20:00:24.370210 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:00:24.370430 | orchestrator | 2025-07-06 20:00:24.371426 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-07-06 20:00:24.371911 | orchestrator | Sunday 06 July 2025 20:00:24 +0000 (0:00:00.142) 0:00:41.013 *********** 2025-07-06 20:00:24.493411 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:24.494183 | orchestrator | 2025-07-06 20:00:24.494518 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-07-06 20:00:24.497268 | orchestrator | Sunday 06 July 2025 20:00:24 +0000 (0:00:00.122) 0:00:41.136 *********** 2025-07-06 20:00:24.607013 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:24.607418 | orchestrator | 2025-07-06 20:00:24.608420 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-07-06 20:00:24.608953 | orchestrator | Sunday 06 July 2025 20:00:24 +0000 (0:00:00.113) 0:00:41.250 *********** 2025-07-06 20:00:24.752504 | orchestrator | ok: [testbed-node-4] => { 2025-07-06 20:00:24.753358 | orchestrator |  "vgs_report": { 2025-07-06 20:00:24.754171 | orchestrator |  "vg": [] 2025-07-06 20:00:24.755609 | orchestrator |  } 2025-07-06 20:00:24.756100 | orchestrator | } 2025-07-06 20:00:24.757298 | orchestrator | 2025-07-06 20:00:24.757957 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-07-06 20:00:24.758394 | orchestrator | Sunday 06 July 2025 20:00:24 +0000 (0:00:00.144) 0:00:41.395 *********** 2025-07-06 20:00:24.879106 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:24.879729 | orchestrator | 2025-07-06 20:00:24.880652 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-07-06 20:00:24.881039 | orchestrator | Sunday 06 July 2025 20:00:24 +0000 (0:00:00.127) 0:00:41.522 *********** 2025-07-06 20:00:25.013238 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:25.014066 | 
orchestrator | 2025-07-06 20:00:25.014833 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-07-06 20:00:25.015920 | orchestrator | Sunday 06 July 2025 20:00:25 +0000 (0:00:00.134) 0:00:41.656 *********** 2025-07-06 20:00:25.146906 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:25.147780 | orchestrator | 2025-07-06 20:00:25.148223 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-07-06 20:00:25.149406 | orchestrator | Sunday 06 July 2025 20:00:25 +0000 (0:00:00.133) 0:00:41.790 *********** 2025-07-06 20:00:25.302430 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:25.302535 | orchestrator | 2025-07-06 20:00:25.302866 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-07-06 20:00:25.303405 | orchestrator | Sunday 06 July 2025 20:00:25 +0000 (0:00:00.154) 0:00:41.945 *********** 2025-07-06 20:00:25.433014 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:25.433347 | orchestrator | 2025-07-06 20:00:25.434215 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-07-06 20:00:25.435634 | orchestrator | Sunday 06 July 2025 20:00:25 +0000 (0:00:00.129) 0:00:42.074 *********** 2025-07-06 20:00:25.782795 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:25.782893 | orchestrator | 2025-07-06 20:00:25.782966 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-07-06 20:00:25.784792 | orchestrator | Sunday 06 July 2025 20:00:25 +0000 (0:00:00.347) 0:00:42.422 *********** 2025-07-06 20:00:25.926488 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:25.928927 | orchestrator | 2025-07-06 20:00:25.929792 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-07-06 20:00:25.932364 | orchestrator | Sunday 06 July 2025 20:00:25 +0000 (0:00:00.142) 0:00:42.564 *********** 2025-07-06 20:00:26.048002 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:26.048198 | orchestrator | 2025-07-06 20:00:26.048218 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-07-06 20:00:26.048664 | orchestrator | Sunday 06 July 2025 20:00:26 +0000 (0:00:00.127) 0:00:42.692 *********** 2025-07-06 20:00:26.177170 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:26.177272 | orchestrator | 2025-07-06 20:00:26.177564 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-07-06 20:00:26.178314 | orchestrator | Sunday 06 July 2025 20:00:26 +0000 (0:00:00.128) 0:00:42.820 *********** 2025-07-06 20:00:26.315507 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:26.315766 | orchestrator | 2025-07-06 20:00:26.316743 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-07-06 20:00:26.317866 | orchestrator | Sunday 06 July 2025 20:00:26 +0000 (0:00:00.137) 0:00:42.958 *********** 2025-07-06 20:00:26.449033 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:26.449632 | orchestrator | 2025-07-06 20:00:26.450259 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-07-06 20:00:26.450600 | orchestrator | Sunday 06 July 2025 20:00:26 +0000 (0:00:00.134) 0:00:43.093 *********** 2025-07-06 20:00:26.590359 | orchestrator | skipping: [testbed-node-4] 2025-07-06 
20:00:26.590466 | orchestrator | 2025-07-06 20:00:26.590483 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-07-06 20:00:26.590636 | orchestrator | Sunday 06 July 2025 20:00:26 +0000 (0:00:00.140) 0:00:43.233 *********** 2025-07-06 20:00:26.723948 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:26.724055 | orchestrator | 2025-07-06 20:00:26.724677 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-07-06 20:00:26.725491 | orchestrator | Sunday 06 July 2025 20:00:26 +0000 (0:00:00.131) 0:00:43.365 *********** 2025-07-06 20:00:26.857259 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:26.857832 | orchestrator | 2025-07-06 20:00:26.858854 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-07-06 20:00:26.859466 | orchestrator | Sunday 06 July 2025 20:00:26 +0000 (0:00:00.134) 0:00:43.500 *********** 2025-07-06 20:00:27.006358 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:27.006623 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:27.007340 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:27.008365 | orchestrator | 2025-07-06 20:00:27.010242 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-07-06 20:00:27.010269 | orchestrator | Sunday 06 July 2025 20:00:26 +0000 (0:00:00.149) 0:00:43.649 *********** 2025-07-06 20:00:27.203081 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:27.203327 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:27.203351 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:27.203364 | orchestrator | 2025-07-06 20:00:27.204287 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-07-06 20:00:27.204375 | orchestrator | Sunday 06 July 2025 20:00:27 +0000 (0:00:00.197) 0:00:43.846 *********** 2025-07-06 20:00:27.348528 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:27.348803 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:27.349693 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:27.350663 | orchestrator | 2025-07-06 20:00:27.351459 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-07-06 20:00:27.353486 | orchestrator | Sunday 06 July 2025 20:00:27 +0000 (0:00:00.145) 0:00:43.992 *********** 2025-07-06 20:00:27.691895 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:27.692117 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:27.693350 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:27.695502 | orchestrator | 2025-07-06 20:00:27.695530 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-07-06 20:00:27.696547 | orchestrator | Sunday 06 July 2025 20:00:27 +0000 (0:00:00.341) 0:00:44.334 *********** 2025-07-06 20:00:27.841635 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:27.842648 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:27.842972 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:27.843739 | orchestrator | 2025-07-06 20:00:27.844391 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-07-06 20:00:27.844873 | orchestrator | Sunday 06 July 2025 20:00:27 +0000 (0:00:00.150) 0:00:44.485 *********** 2025-07-06 20:00:27.990709 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:27.990841 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:27.991651 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:27.992093 | orchestrator | 2025-07-06 20:00:27.992682 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-07-06 20:00:27.993623 | orchestrator | Sunday 06 July 2025 20:00:27 +0000 (0:00:00.148) 0:00:44.633 *********** 2025-07-06 20:00:28.143351 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:28.143707 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:28.144448 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:28.145274 | orchestrator | 2025-07-06 20:00:28.145963 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-07-06 20:00:28.146491 | orchestrator | Sunday 06 July 2025 20:00:28 +0000 (0:00:00.153) 0:00:44.786 *********** 2025-07-06 20:00:28.276959 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:28.277056 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:28.277505 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:28.277949 | orchestrator | 2025-07-06 20:00:28.278295 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-07-06 20:00:28.278958 | orchestrator | Sunday 06 July 2025 20:00:28 +0000 (0:00:00.133) 0:00:44.920 *********** 2025-07-06 20:00:28.784723 | orchestrator | ok: [testbed-node-4] 2025-07-06 
20:00:28.785321 | orchestrator | 2025-07-06 20:00:28.786088 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-07-06 20:00:28.787657 | orchestrator | Sunday 06 July 2025 20:00:28 +0000 (0:00:00.505) 0:00:45.426 *********** 2025-07-06 20:00:29.280959 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:00:29.281717 | orchestrator | 2025-07-06 20:00:29.282824 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-07-06 20:00:29.284122 | orchestrator | Sunday 06 July 2025 20:00:29 +0000 (0:00:00.497) 0:00:45.924 *********** 2025-07-06 20:00:29.426262 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:00:29.426520 | orchestrator | 2025-07-06 20:00:29.427324 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-07-06 20:00:29.427839 | orchestrator | Sunday 06 July 2025 20:00:29 +0000 (0:00:00.145) 0:00:46.069 *********** 2025-07-06 20:00:29.592025 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'vg_name': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'}) 2025-07-06 20:00:29.592749 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'vg_name': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'}) 2025-07-06 20:00:29.593185 | orchestrator | 2025-07-06 20:00:29.593746 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-07-06 20:00:29.594191 | orchestrator | Sunday 06 July 2025 20:00:29 +0000 (0:00:00.167) 0:00:46.236 *********** 2025-07-06 20:00:29.751223 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:29.751506 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:29.752747 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:29.753287 | orchestrator | 2025-07-06 20:00:29.754144 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-07-06 20:00:29.754756 | orchestrator | Sunday 06 July 2025 20:00:29 +0000 (0:00:00.154) 0:00:46.390 *********** 2025-07-06 20:00:29.893598 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:29.893683 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  2025-07-06 20:00:29.894315 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:29.895205 | orchestrator | 2025-07-06 20:00:29.895824 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-07-06 20:00:29.896518 | orchestrator | Sunday 06 July 2025 20:00:29 +0000 (0:00:00.143) 0:00:46.534 *********** 2025-07-06 20:00:30.049326 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'})  2025-07-06 20:00:30.049423 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'})  
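Note: the "Get list of Ceph LVs/PVs with associated VGs", "Combine JSON from _lvs_cmd_output/_pvs_cmd_output", "Create list of VG/LV names" and "Fail if ... LV defined in lvm_volumes is missing" tasks above amount to reading LVM's JSON report and verifying that every expected VG/LV pair exists on the node. The playbook's exact commands are not shown in this log; a rough standalone sketch using LVM2's JSON report options (lvs/pvs --reportformat json), with the lvm_volumes values copied from the testbed-node-4 items above, could look like:

    import json
    import subprocess

    def lvm_report(cmd, key, fields):
        """Run an LVM reporting command and return the parsed report rows."""
        out = subprocess.run(
            [cmd, "--reportformat", "json", "-o", fields],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["report"][0][key]

    lvs = lvm_report("lvs", "lv", "lv_name,vg_name")
    pvs = lvm_report("pvs", "pv", "pv_name,vg_name")
    lvm_report_data = {"lv": lvs, "pv": pvs}   # same shape as the lvm_report data printed in this log

    # VG/LV names actually present on the node
    vg_lv_names = {f"{e['vg_name']}/{e['lv_name']}" for e in lvs}

    # Entries expected by the deployment
    lvm_volumes = [
        {"data": "osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23",
         "data_vg": "ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23"},
        {"data": "osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d",
         "data_vg": "ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d"},
    ]

    missing = [v for v in lvm_volumes
               if f"{v['data_vg']}/{v['data']}" not in vg_lv_names]
    if missing:
        raise SystemExit(f"block LV defined in lvm_volumes is missing: {missing}")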
2025-07-06 20:00:30.050190 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:00:30.050466 | orchestrator | 2025-07-06 20:00:30.051202 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-07-06 20:00:30.051866 | orchestrator | Sunday 06 July 2025 20:00:30 +0000 (0:00:00.157) 0:00:46.692 *********** 2025-07-06 20:00:30.512519 | orchestrator | ok: [testbed-node-4] => { 2025-07-06 20:00:30.513527 | orchestrator |  "lvm_report": { 2025-07-06 20:00:30.513884 | orchestrator |  "lv": [ 2025-07-06 20:00:30.514543 | orchestrator |  { 2025-07-06 20:00:30.515574 | orchestrator |  "lv_name": "osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23", 2025-07-06 20:00:30.515910 | orchestrator |  "vg_name": "ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23" 2025-07-06 20:00:30.518380 | orchestrator |  }, 2025-07-06 20:00:30.518531 | orchestrator |  { 2025-07-06 20:00:30.518697 | orchestrator |  "lv_name": "osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d", 2025-07-06 20:00:30.519919 | orchestrator |  "vg_name": "ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d" 2025-07-06 20:00:30.519953 | orchestrator |  } 2025-07-06 20:00:30.519996 | orchestrator |  ], 2025-07-06 20:00:30.520051 | orchestrator |  "pv": [ 2025-07-06 20:00:30.520472 | orchestrator |  { 2025-07-06 20:00:30.521085 | orchestrator |  "pv_name": "/dev/sdb", 2025-07-06 20:00:30.521387 | orchestrator |  "vg_name": "ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23" 2025-07-06 20:00:30.521768 | orchestrator |  }, 2025-07-06 20:00:30.522161 | orchestrator |  { 2025-07-06 20:00:30.522718 | orchestrator |  "pv_name": "/dev/sdc", 2025-07-06 20:00:30.522947 | orchestrator |  "vg_name": "ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d" 2025-07-06 20:00:30.523312 | orchestrator |  } 2025-07-06 20:00:30.524023 | orchestrator |  ] 2025-07-06 20:00:30.525271 | orchestrator |  } 2025-07-06 20:00:30.525748 | orchestrator | } 2025-07-06 20:00:30.526720 | orchestrator | 2025-07-06 20:00:30.526953 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-07-06 20:00:30.527725 | orchestrator | 2025-07-06 20:00:30.528236 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-06 20:00:30.528424 | orchestrator | Sunday 06 July 2025 20:00:30 +0000 (0:00:00.463) 0:00:47.156 *********** 2025-07-06 20:00:30.740129 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-07-06 20:00:30.740969 | orchestrator | 2025-07-06 20:00:30.741619 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-06 20:00:30.742415 | orchestrator | Sunday 06 July 2025 20:00:30 +0000 (0:00:00.226) 0:00:47.382 *********** 2025-07-06 20:00:30.957051 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:00:30.957129 | orchestrator | 2025-07-06 20:00:30.957155 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:30.957207 | orchestrator | Sunday 06 July 2025 20:00:30 +0000 (0:00:00.217) 0:00:47.600 *********** 2025-07-06 20:00:31.354714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-07-06 20:00:31.355787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-07-06 20:00:31.357041 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-07-06 20:00:31.358731 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-07-06 20:00:31.358800 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-07-06 20:00:31.359272 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-07-06 20:00:31.359682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-07-06 20:00:31.360205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-07-06 20:00:31.360708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-07-06 20:00:31.361117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-07-06 20:00:31.361739 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-07-06 20:00:31.362148 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-07-06 20:00:31.362781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-07-06 20:00:31.362876 | orchestrator | 2025-07-06 20:00:31.363339 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:31.363737 | orchestrator | Sunday 06 July 2025 20:00:31 +0000 (0:00:00.397) 0:00:47.998 *********** 2025-07-06 20:00:31.551280 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:31.551484 | orchestrator | 2025-07-06 20:00:31.552172 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:31.552663 | orchestrator | Sunday 06 July 2025 20:00:31 +0000 (0:00:00.196) 0:00:48.195 *********** 2025-07-06 20:00:31.749096 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:31.749286 | orchestrator | 2025-07-06 20:00:31.749775 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:31.752731 | orchestrator | Sunday 06 July 2025 20:00:31 +0000 (0:00:00.197) 0:00:48.392 *********** 2025-07-06 20:00:31.953081 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:31.953758 | orchestrator | 2025-07-06 20:00:31.957342 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:31.958006 | orchestrator | Sunday 06 July 2025 20:00:31 +0000 (0:00:00.202) 0:00:48.595 *********** 2025-07-06 20:00:32.159894 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:32.161024 | orchestrator | 2025-07-06 20:00:32.161838 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:32.162611 | orchestrator | Sunday 06 July 2025 20:00:32 +0000 (0:00:00.207) 0:00:48.803 *********** 2025-07-06 20:00:32.355345 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:32.355990 | orchestrator | 2025-07-06 20:00:32.356911 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:32.358158 | orchestrator | Sunday 06 July 2025 20:00:32 +0000 (0:00:00.195) 0:00:48.998 *********** 2025-07-06 20:00:32.923122 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:32.923281 | orchestrator | 2025-07-06 20:00:32.923815 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:32.924682 | orchestrator | 
Sunday 06 July 2025 20:00:32 +0000 (0:00:00.566) 0:00:49.565 *********** 2025-07-06 20:00:33.120452 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:33.120784 | orchestrator | 2025-07-06 20:00:33.121189 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:33.121845 | orchestrator | Sunday 06 July 2025 20:00:33 +0000 (0:00:00.199) 0:00:49.764 *********** 2025-07-06 20:00:33.312678 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:33.313230 | orchestrator | 2025-07-06 20:00:33.314485 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:33.315000 | orchestrator | Sunday 06 July 2025 20:00:33 +0000 (0:00:00.191) 0:00:49.955 *********** 2025-07-06 20:00:33.704764 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280) 2025-07-06 20:00:33.704934 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280) 2025-07-06 20:00:33.705649 | orchestrator | 2025-07-06 20:00:33.706141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:33.706870 | orchestrator | Sunday 06 July 2025 20:00:33 +0000 (0:00:00.392) 0:00:50.348 *********** 2025-07-06 20:00:34.118716 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d394e861-9c48-44bd-b1dc-9e2695f6f7e7) 2025-07-06 20:00:34.120097 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d394e861-9c48-44bd-b1dc-9e2695f6f7e7) 2025-07-06 20:00:34.121209 | orchestrator | 2025-07-06 20:00:34.121721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:34.122314 | orchestrator | Sunday 06 July 2025 20:00:34 +0000 (0:00:00.411) 0:00:50.759 *********** 2025-07-06 20:00:34.527978 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ee53a9be-d7f6-4740-ab76-379edf2c3c5b) 2025-07-06 20:00:34.528396 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ee53a9be-d7f6-4740-ab76-379edf2c3c5b) 2025-07-06 20:00:34.529228 | orchestrator | 2025-07-06 20:00:34.530200 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:34.531072 | orchestrator | Sunday 06 July 2025 20:00:34 +0000 (0:00:00.411) 0:00:51.171 *********** 2025-07-06 20:00:34.946740 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_825fbe01-1f52-40fd-870f-6965feac768c) 2025-07-06 20:00:34.946896 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_825fbe01-1f52-40fd-870f-6965feac768c) 2025-07-06 20:00:34.947666 | orchestrator | 2025-07-06 20:00:34.948514 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:00:34.949250 | orchestrator | Sunday 06 July 2025 20:00:34 +0000 (0:00:00.417) 0:00:51.588 *********** 2025-07-06 20:00:35.275357 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-06 20:00:35.275889 | orchestrator | 2025-07-06 20:00:35.277079 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:35.278106 | orchestrator | Sunday 06 July 2025 20:00:35 +0000 (0:00:00.329) 0:00:51.918 *********** 2025-07-06 20:00:35.695000 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-07-06 
20:00:35.695596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-07-06 20:00:35.696037 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-07-06 20:00:35.697638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-07-06 20:00:35.701590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-07-06 20:00:35.703285 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-07-06 20:00:35.704261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-07-06 20:00:35.705353 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-07-06 20:00:35.706279 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-07-06 20:00:35.707124 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-07-06 20:00:35.707966 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-07-06 20:00:35.708770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-07-06 20:00:35.709369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-07-06 20:00:35.710201 | orchestrator | 2025-07-06 20:00:35.710916 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:35.711482 | orchestrator | Sunday 06 July 2025 20:00:35 +0000 (0:00:00.420) 0:00:52.338 *********** 2025-07-06 20:00:35.892619 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:35.892823 | orchestrator | 2025-07-06 20:00:35.893617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:35.894174 | orchestrator | Sunday 06 July 2025 20:00:35 +0000 (0:00:00.195) 0:00:52.534 *********** 2025-07-06 20:00:36.102466 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:36.102588 | orchestrator | 2025-07-06 20:00:36.103279 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:36.103922 | orchestrator | Sunday 06 July 2025 20:00:36 +0000 (0:00:00.211) 0:00:52.745 *********** 2025-07-06 20:00:36.689690 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:36.689907 | orchestrator | 2025-07-06 20:00:36.690688 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:36.691692 | orchestrator | Sunday 06 July 2025 20:00:36 +0000 (0:00:00.586) 0:00:53.332 *********** 2025-07-06 20:00:36.899508 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:36.899993 | orchestrator | 2025-07-06 20:00:36.900417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:36.900716 | orchestrator | Sunday 06 July 2025 20:00:36 +0000 (0:00:00.208) 0:00:53.541 *********** 2025-07-06 20:00:37.091170 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:37.091342 | orchestrator | 2025-07-06 20:00:37.092592 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:37.093433 | orchestrator | 
Sunday 06 July 2025 20:00:37 +0000 (0:00:00.192) 0:00:53.734 *********** 2025-07-06 20:00:37.292986 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:37.294119 | orchestrator | 2025-07-06 20:00:37.295171 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:37.296912 | orchestrator | Sunday 06 July 2025 20:00:37 +0000 (0:00:00.202) 0:00:53.936 *********** 2025-07-06 20:00:37.491110 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:37.491660 | orchestrator | 2025-07-06 20:00:37.492924 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:37.494397 | orchestrator | Sunday 06 July 2025 20:00:37 +0000 (0:00:00.197) 0:00:54.133 *********** 2025-07-06 20:00:37.678434 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:37.678661 | orchestrator | 2025-07-06 20:00:37.679267 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:37.679847 | orchestrator | Sunday 06 July 2025 20:00:37 +0000 (0:00:00.188) 0:00:54.322 *********** 2025-07-06 20:00:38.311084 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-07-06 20:00:38.311764 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-07-06 20:00:38.313389 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-07-06 20:00:38.314350 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-07-06 20:00:38.314739 | orchestrator | 2025-07-06 20:00:38.315629 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:38.316047 | orchestrator | Sunday 06 July 2025 20:00:38 +0000 (0:00:00.630) 0:00:54.952 *********** 2025-07-06 20:00:38.515829 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:38.517047 | orchestrator | 2025-07-06 20:00:38.518127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:38.518559 | orchestrator | Sunday 06 July 2025 20:00:38 +0000 (0:00:00.205) 0:00:55.158 *********** 2025-07-06 20:00:38.766254 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:38.766416 | orchestrator | 2025-07-06 20:00:38.769156 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:38.769182 | orchestrator | Sunday 06 July 2025 20:00:38 +0000 (0:00:00.247) 0:00:55.405 *********** 2025-07-06 20:00:38.953773 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:38.953876 | orchestrator | 2025-07-06 20:00:38.953892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:00:38.953968 | orchestrator | Sunday 06 July 2025 20:00:38 +0000 (0:00:00.187) 0:00:55.593 *********** 2025-07-06 20:00:39.150118 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:39.150396 | orchestrator | 2025-07-06 20:00:39.150438 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-07-06 20:00:39.151592 | orchestrator | Sunday 06 July 2025 20:00:39 +0000 (0:00:00.200) 0:00:55.793 *********** 2025-07-06 20:00:39.478682 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:39.479337 | orchestrator | 2025-07-06 20:00:39.480309 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-07-06 20:00:39.482156 | orchestrator | Sunday 06 July 2025 20:00:39 +0000 (0:00:00.328) 0:00:56.121 *********** 2025-07-06 
20:00:39.663178 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4472ae94-c442-5fee-95ac-d2e3b3e55ca4'}}) 2025-07-06 20:00:39.663311 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c6cf71a-fa39-576b-8a24-237c163534df'}}) 2025-07-06 20:00:39.663675 | orchestrator | 2025-07-06 20:00:39.663982 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-07-06 20:00:39.664662 | orchestrator | Sunday 06 July 2025 20:00:39 +0000 (0:00:00.185) 0:00:56.306 *********** 2025-07-06 20:00:41.438815 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'}) 2025-07-06 20:00:41.439062 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'}) 2025-07-06 20:00:41.439830 | orchestrator | 2025-07-06 20:00:41.440932 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-07-06 20:00:41.442140 | orchestrator | Sunday 06 July 2025 20:00:41 +0000 (0:00:01.772) 0:00:58.079 *********** 2025-07-06 20:00:41.589817 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:41.590571 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})  2025-07-06 20:00:41.591743 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:41.592488 | orchestrator | 2025-07-06 20:00:41.593025 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-07-06 20:00:41.593546 | orchestrator | Sunday 06 July 2025 20:00:41 +0000 (0:00:00.154) 0:00:58.233 *********** 2025-07-06 20:00:42.895389 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'}) 2025-07-06 20:00:42.895986 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'}) 2025-07-06 20:00:42.896868 | orchestrator | 2025-07-06 20:00:42.898299 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-07-06 20:00:42.898581 | orchestrator | Sunday 06 July 2025 20:00:42 +0000 (0:00:01.300) 0:00:59.534 *********** 2025-07-06 20:00:43.042133 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:43.043034 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})  2025-07-06 20:00:43.043970 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:43.045296 | orchestrator | 2025-07-06 20:00:43.046875 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-07-06 20:00:43.047442 | orchestrator | Sunday 06 July 2025 20:00:43 +0000 (0:00:00.151) 0:00:59.685 *********** 2025-07-06 20:00:43.172384 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:43.173411 | 
orchestrator | 2025-07-06 20:00:43.173925 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-07-06 20:00:43.174731 | orchestrator | Sunday 06 July 2025 20:00:43 +0000 (0:00:00.130) 0:00:59.815 *********** 2025-07-06 20:00:43.348845 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:43.349790 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})  2025-07-06 20:00:43.350660 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:43.351144 | orchestrator | 2025-07-06 20:00:43.351958 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-07-06 20:00:43.353087 | orchestrator | Sunday 06 July 2025 20:00:43 +0000 (0:00:00.176) 0:00:59.992 *********** 2025-07-06 20:00:43.488974 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:43.490122 | orchestrator | 2025-07-06 20:00:43.490881 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-07-06 20:00:43.492422 | orchestrator | Sunday 06 July 2025 20:00:43 +0000 (0:00:00.138) 0:01:00.131 *********** 2025-07-06 20:00:43.639867 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:43.641038 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})  2025-07-06 20:00:43.641816 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:43.643040 | orchestrator | 2025-07-06 20:00:43.644048 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-07-06 20:00:43.645212 | orchestrator | Sunday 06 July 2025 20:00:43 +0000 (0:00:00.151) 0:01:00.282 *********** 2025-07-06 20:00:43.760929 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:43.761124 | orchestrator | 2025-07-06 20:00:43.761676 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-07-06 20:00:43.762452 | orchestrator | Sunday 06 July 2025 20:00:43 +0000 (0:00:00.122) 0:01:00.404 *********** 2025-07-06 20:00:43.897718 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:43.898625 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})  2025-07-06 20:00:43.900090 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:43.901589 | orchestrator | 2025-07-06 20:00:43.902980 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-07-06 20:00:43.905326 | orchestrator | Sunday 06 July 2025 20:00:43 +0000 (0:00:00.135) 0:01:00.540 *********** 2025-07-06 20:00:44.043596 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:00:44.044350 | orchestrator | 2025-07-06 20:00:44.045189 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-07-06 20:00:44.046355 | orchestrator | Sunday 06 July 2025 20:00:44 +0000 (0:00:00.146) 
0:01:00.686 *********** 2025-07-06 20:00:44.379436 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:44.381333 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})  2025-07-06 20:00:44.382492 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:44.383209 | orchestrator | 2025-07-06 20:00:44.384272 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-07-06 20:00:44.384416 | orchestrator | Sunday 06 July 2025 20:00:44 +0000 (0:00:00.335) 0:01:01.022 *********** 2025-07-06 20:00:44.545882 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:44.546582 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})  2025-07-06 20:00:44.547480 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:44.548190 | orchestrator | 2025-07-06 20:00:44.549273 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-07-06 20:00:44.550107 | orchestrator | Sunday 06 July 2025 20:00:44 +0000 (0:00:00.164) 0:01:01.187 *********** 2025-07-06 20:00:44.708450 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:44.708599 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})  2025-07-06 20:00:44.709479 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:44.709917 | orchestrator | 2025-07-06 20:00:44.710627 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-07-06 20:00:44.711259 | orchestrator | Sunday 06 July 2025 20:00:44 +0000 (0:00:00.164) 0:01:01.351 *********** 2025-07-06 20:00:44.838075 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:44.838352 | orchestrator | 2025-07-06 20:00:44.839122 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-07-06 20:00:44.839980 | orchestrator | Sunday 06 July 2025 20:00:44 +0000 (0:00:00.129) 0:01:01.481 *********** 2025-07-06 20:00:44.976497 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:44.976743 | orchestrator | 2025-07-06 20:00:44.977630 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-07-06 20:00:44.978095 | orchestrator | Sunday 06 July 2025 20:00:44 +0000 (0:00:00.138) 0:01:01.619 *********** 2025-07-06 20:00:45.125988 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:45.128127 | orchestrator | 2025-07-06 20:00:45.128167 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-07-06 20:00:45.132456 | orchestrator | Sunday 06 July 2025 20:00:45 +0000 (0:00:00.149) 0:01:01.769 *********** 2025-07-06 20:00:45.253478 | orchestrator | ok: [testbed-node-5] => { 2025-07-06 20:00:45.253637 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-07-06 20:00:45.254742 | orchestrator | } 2025-07-06 
20:00:45.255237 | orchestrator | 2025-07-06 20:00:45.256981 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-07-06 20:00:45.257071 | orchestrator | Sunday 06 July 2025 20:00:45 +0000 (0:00:00.126) 0:01:01.895 *********** 2025-07-06 20:00:45.395989 | orchestrator | ok: [testbed-node-5] => { 2025-07-06 20:00:45.397404 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-07-06 20:00:45.398117 | orchestrator | } 2025-07-06 20:00:45.399038 | orchestrator | 2025-07-06 20:00:45.400463 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-07-06 20:00:45.400555 | orchestrator | Sunday 06 July 2025 20:00:45 +0000 (0:00:00.143) 0:01:02.039 *********** 2025-07-06 20:00:45.541876 | orchestrator | ok: [testbed-node-5] => { 2025-07-06 20:00:45.542766 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-07-06 20:00:45.543953 | orchestrator | } 2025-07-06 20:00:45.544611 | orchestrator | 2025-07-06 20:00:45.545203 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-07-06 20:00:45.545981 | orchestrator | Sunday 06 July 2025 20:00:45 +0000 (0:00:00.143) 0:01:02.183 *********** 2025-07-06 20:00:46.035112 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:00:46.035789 | orchestrator | 2025-07-06 20:00:46.036270 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-07-06 20:00:46.037334 | orchestrator | Sunday 06 July 2025 20:00:46 +0000 (0:00:00.494) 0:01:02.678 *********** 2025-07-06 20:00:46.546799 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:00:46.547014 | orchestrator | 2025-07-06 20:00:46.547949 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-07-06 20:00:46.548754 | orchestrator | Sunday 06 July 2025 20:00:46 +0000 (0:00:00.510) 0:01:03.189 *********** 2025-07-06 20:00:47.028627 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:00:47.029894 | orchestrator | 2025-07-06 20:00:47.031735 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-07-06 20:00:47.032593 | orchestrator | Sunday 06 July 2025 20:00:47 +0000 (0:00:00.483) 0:01:03.672 *********** 2025-07-06 20:00:47.383721 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:00:47.384501 | orchestrator | 2025-07-06 20:00:47.386809 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-07-06 20:00:47.386866 | orchestrator | Sunday 06 July 2025 20:00:47 +0000 (0:00:00.353) 0:01:04.026 *********** 2025-07-06 20:00:47.488374 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:47.489702 | orchestrator | 2025-07-06 20:00:47.490630 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-07-06 20:00:47.491498 | orchestrator | Sunday 06 July 2025 20:00:47 +0000 (0:00:00.104) 0:01:04.130 *********** 2025-07-06 20:00:47.596104 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:47.597972 | orchestrator | 2025-07-06 20:00:47.598640 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-07-06 20:00:47.599291 | orchestrator | Sunday 06 July 2025 20:00:47 +0000 (0:00:00.107) 0:01:04.237 *********** 2025-07-06 20:00:47.736248 | orchestrator | ok: [testbed-node-5] => { 2025-07-06 20:00:47.737327 | orchestrator |  "vgs_report": { 2025-07-06 20:00:47.737833 | orchestrator |  "vg": [] 2025-07-06 
20:00:47.738778 | orchestrator |  } 2025-07-06 20:00:47.739361 | orchestrator | } 2025-07-06 20:00:47.740176 | orchestrator | 2025-07-06 20:00:47.740674 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-07-06 20:00:47.741432 | orchestrator | Sunday 06 July 2025 20:00:47 +0000 (0:00:00.141) 0:01:04.379 *********** 2025-07-06 20:00:47.874553 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:47.875158 | orchestrator | 2025-07-06 20:00:47.875919 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-07-06 20:00:47.876655 | orchestrator | Sunday 06 July 2025 20:00:47 +0000 (0:00:00.138) 0:01:04.518 *********** 2025-07-06 20:00:48.012393 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:48.013683 | orchestrator | 2025-07-06 20:00:48.014787 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-07-06 20:00:48.016572 | orchestrator | Sunday 06 July 2025 20:00:48 +0000 (0:00:00.137) 0:01:04.655 *********** 2025-07-06 20:00:48.151600 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:48.153067 | orchestrator | 2025-07-06 20:00:48.154245 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-07-06 20:00:48.154768 | orchestrator | Sunday 06 July 2025 20:00:48 +0000 (0:00:00.138) 0:01:04.794 *********** 2025-07-06 20:00:48.296201 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:48.297308 | orchestrator | 2025-07-06 20:00:48.298214 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-07-06 20:00:48.299630 | orchestrator | Sunday 06 July 2025 20:00:48 +0000 (0:00:00.145) 0:01:04.939 *********** 2025-07-06 20:00:48.434278 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:48.435483 | orchestrator | 2025-07-06 20:00:48.436023 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-07-06 20:00:48.437495 | orchestrator | Sunday 06 July 2025 20:00:48 +0000 (0:00:00.137) 0:01:05.077 *********** 2025-07-06 20:00:48.562602 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:48.563496 | orchestrator | 2025-07-06 20:00:48.564314 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-07-06 20:00:48.567113 | orchestrator | Sunday 06 July 2025 20:00:48 +0000 (0:00:00.128) 0:01:05.205 *********** 2025-07-06 20:00:48.692631 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:48.693066 | orchestrator | 2025-07-06 20:00:48.693446 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-07-06 20:00:48.694143 | orchestrator | Sunday 06 July 2025 20:00:48 +0000 (0:00:00.131) 0:01:05.336 *********** 2025-07-06 20:00:48.828003 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:48.828715 | orchestrator | 2025-07-06 20:00:48.829463 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-07-06 20:00:48.830877 | orchestrator | Sunday 06 July 2025 20:00:48 +0000 (0:00:00.134) 0:01:05.471 *********** 2025-07-06 20:00:49.170493 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:49.171036 | orchestrator | 2025-07-06 20:00:49.171571 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-07-06 20:00:49.173279 | orchestrator | Sunday 06 July 2025 20:00:49 +0000 
(0:00:00.340) 0:01:05.812 *********** 2025-07-06 20:00:49.303275 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:49.303380 | orchestrator | 2025-07-06 20:00:49.303542 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-07-06 20:00:49.304427 | orchestrator | Sunday 06 July 2025 20:00:49 +0000 (0:00:00.134) 0:01:05.946 *********** 2025-07-06 20:00:49.437681 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:49.438519 | orchestrator | 2025-07-06 20:00:49.438968 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-07-06 20:00:49.440224 | orchestrator | Sunday 06 July 2025 20:00:49 +0000 (0:00:00.134) 0:01:06.081 *********** 2025-07-06 20:00:49.572803 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:49.573191 | orchestrator | 2025-07-06 20:00:49.574193 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-07-06 20:00:49.575210 | orchestrator | Sunday 06 July 2025 20:00:49 +0000 (0:00:00.134) 0:01:06.215 *********** 2025-07-06 20:00:49.710153 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:49.710287 | orchestrator | 2025-07-06 20:00:49.710399 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-07-06 20:00:49.710877 | orchestrator | Sunday 06 July 2025 20:00:49 +0000 (0:00:00.136) 0:01:06.352 *********** 2025-07-06 20:00:49.852998 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:49.853287 | orchestrator | 2025-07-06 20:00:49.853317 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-07-06 20:00:49.853619 | orchestrator | Sunday 06 July 2025 20:00:49 +0000 (0:00:00.143) 0:01:06.496 *********** 2025-07-06 20:00:50.004563 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:50.004666 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})  2025-07-06 20:00:50.004681 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:50.006094 | orchestrator | 2025-07-06 20:00:50.006234 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-07-06 20:00:50.007650 | orchestrator | Sunday 06 July 2025 20:00:49 +0000 (0:00:00.152) 0:01:06.648 *********** 2025-07-06 20:00:50.165718 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:50.165999 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})  2025-07-06 20:00:50.168116 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:50.168206 | orchestrator | 2025-07-06 20:00:50.168744 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-07-06 20:00:50.169474 | orchestrator | Sunday 06 July 2025 20:00:50 +0000 (0:00:00.160) 0:01:06.809 *********** 2025-07-06 20:00:50.317430 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:50.318459 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})  2025-07-06 20:00:50.320064 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:50.320928 | orchestrator | 2025-07-06 20:00:50.321665 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-07-06 20:00:50.322115 | orchestrator | Sunday 06 July 2025 20:00:50 +0000 (0:00:00.149) 0:01:06.958 *********** 2025-07-06 20:00:50.460452 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:50.460640 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})  2025-07-06 20:00:50.460658 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:50.460774 | orchestrator | 2025-07-06 20:00:50.461813 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-07-06 20:00:50.462238 | orchestrator | Sunday 06 July 2025 20:00:50 +0000 (0:00:00.141) 0:01:07.100 *********** 2025-07-06 20:00:50.608178 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:50.609191 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})  2025-07-06 20:00:50.610839 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:50.612549 | orchestrator | 2025-07-06 20:00:50.613805 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-07-06 20:00:50.614881 | orchestrator | Sunday 06 July 2025 20:00:50 +0000 (0:00:00.150) 0:01:07.251 *********** 2025-07-06 20:00:50.769008 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:50.769366 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})  2025-07-06 20:00:50.770583 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:50.771306 | orchestrator | 2025-07-06 20:00:50.772012 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-07-06 20:00:50.772590 | orchestrator | Sunday 06 July 2025 20:00:50 +0000 (0:00:00.160) 0:01:07.411 *********** 2025-07-06 20:00:51.132637 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:51.132761 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})  2025-07-06 20:00:51.132841 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:51.133265 | orchestrator | 2025-07-06 20:00:51.134510 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-07-06 20:00:51.137268 | orchestrator | Sunday 06 July 2025 20:00:51 +0000 (0:00:00.363) 0:01:07.775 *********** 2025-07-06 
20:00:51.286405 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:51.287392 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})  2025-07-06 20:00:51.289072 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:51.289847 | orchestrator | 2025-07-06 20:00:51.290558 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-07-06 20:00:51.291719 | orchestrator | Sunday 06 July 2025 20:00:51 +0000 (0:00:00.154) 0:01:07.929 *********** 2025-07-06 20:00:51.812808 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:00:51.813451 | orchestrator | 2025-07-06 20:00:51.814451 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-07-06 20:00:51.815199 | orchestrator | Sunday 06 July 2025 20:00:51 +0000 (0:00:00.522) 0:01:08.452 *********** 2025-07-06 20:00:52.311563 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:00:52.311735 | orchestrator | 2025-07-06 20:00:52.312052 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-07-06 20:00:52.312707 | orchestrator | Sunday 06 July 2025 20:00:52 +0000 (0:00:00.503) 0:01:08.955 *********** 2025-07-06 20:00:52.455265 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:00:52.455471 | orchestrator | 2025-07-06 20:00:52.455981 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-07-06 20:00:52.456406 | orchestrator | Sunday 06 July 2025 20:00:52 +0000 (0:00:00.143) 0:01:09.098 *********** 2025-07-06 20:00:52.616759 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'vg_name': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'}) 2025-07-06 20:00:52.616977 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'vg_name': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'}) 2025-07-06 20:00:52.617535 | orchestrator | 2025-07-06 20:00:52.618310 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-07-06 20:00:52.619064 | orchestrator | Sunday 06 July 2025 20:00:52 +0000 (0:00:00.161) 0:01:09.260 *********** 2025-07-06 20:00:52.768146 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:52.769014 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})  2025-07-06 20:00:52.769201 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:00:52.769685 | orchestrator | 2025-07-06 20:00:52.770346 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-07-06 20:00:52.770634 | orchestrator | Sunday 06 July 2025 20:00:52 +0000 (0:00:00.150) 0:01:09.410 *********** 2025-07-06 20:00:52.915735 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})  2025-07-06 20:00:52.916025 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 
'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})
2025-07-06 20:00:52.917023 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:00:52.917472 | orchestrator |
2025-07-06 20:00:52.918106 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-07-06 20:00:52.918770 | orchestrator | Sunday 06 July 2025 20:00:52 +0000 (0:00:00.148) 0:01:09.559 ***********
2025-07-06 20:00:53.057229 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'})
2025-07-06 20:00:53.057446 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'})
2025-07-06 20:00:53.058360 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:00:53.059211 | orchestrator |
2025-07-06 20:00:53.059829 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-07-06 20:00:53.061069 | orchestrator | Sunday 06 July 2025 20:00:53 +0000 (0:00:00.141) 0:01:09.700 ***********
2025-07-06 20:00:53.187422 | orchestrator | ok: [testbed-node-5] => {
2025-07-06 20:00:53.187572 | orchestrator |  "lvm_report": {
2025-07-06 20:00:53.188146 | orchestrator |  "lv": [
2025-07-06 20:00:53.188508 | orchestrator |  {
2025-07-06 20:00:53.189037 | orchestrator |  "lv_name": "osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4",
2025-07-06 20:00:53.190275 | orchestrator |  "vg_name": "ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4"
2025-07-06 20:00:53.190539 | orchestrator |  },
2025-07-06 20:00:53.190982 | orchestrator |  {
2025-07-06 20:00:53.191395 | orchestrator |  "lv_name": "osd-block-8c6cf71a-fa39-576b-8a24-237c163534df",
2025-07-06 20:00:53.192174 | orchestrator |  "vg_name": "ceph-8c6cf71a-fa39-576b-8a24-237c163534df"
2025-07-06 20:00:53.192848 | orchestrator |  }
2025-07-06 20:00:53.193369 | orchestrator |  ],
2025-07-06 20:00:53.194314 | orchestrator |  "pv": [
2025-07-06 20:00:53.194400 | orchestrator |  {
2025-07-06 20:00:53.195052 | orchestrator |  "pv_name": "/dev/sdb",
2025-07-06 20:00:53.196071 | orchestrator |  "vg_name": "ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4"
2025-07-06 20:00:53.196232 | orchestrator |  },
2025-07-06 20:00:53.196971 | orchestrator |  {
2025-07-06 20:00:53.197313 | orchestrator |  "pv_name": "/dev/sdc",
2025-07-06 20:00:53.198143 | orchestrator |  "vg_name": "ceph-8c6cf71a-fa39-576b-8a24-237c163534df"
2025-07-06 20:00:53.198888 | orchestrator |  }
2025-07-06 20:00:53.199385 | orchestrator |  ]
2025-07-06 20:00:53.200099 | orchestrator |  }
2025-07-06 20:00:53.200639 | orchestrator | }
2025-07-06 20:00:53.201237 | orchestrator |
2025-07-06 20:00:53.201867 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:00:53.202268 | orchestrator | 2025-07-06 20:00:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-07-06 20:00:53.202612 | orchestrator | 2025-07-06 20:00:53 | INFO  | Please wait and do not abort execution.
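The play above turns each disk listed in ceph_osd_devices into a dedicated LVM volume group and logical volume named after its osd_lvm_uuid (ceph-<uuid> / osd-block-<uuid>) and closes by printing an lvs/pvs style report. Below is a minimal sketch of how such a report could be reproduced by hand on one of the nodes; it assumes the stock lvm2 CLI (lvs and pvs with --reportformat json) is available and run with root privileges, and the helper names _lvm_json() and lvm_report() are illustrative, not the playbook's actual implementation.

# Sketch: rebuild an "lvm_report" like the one printed above with the lvm2 CLI.
# Assumes lvs/pvs are installed and invoked as root; the osism playbook itself
# may gather this data differently.
import json
import subprocess

def _lvm_json(cmd):
    # --reportformat json makes lvs/pvs emit {"report": [{"lv": [...]}, ...]}
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    return json.loads(out)["report"][0]

def lvm_report():
    lvs = _lvm_json(["lvs", "--reportformat", "json", "-o", "lv_name,vg_name"])["lv"]
    pvs = _lvm_json(["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"])["pv"]
    # Keep only the Ceph OSD volumes, which follow the ceph-<uuid>/osd-block-<uuid>
    # naming convention visible in the log output.
    return {
        "lv": [lv for lv in lvs if lv["vg_name"].startswith("ceph-")],
        "pv": [pv for pv in pvs if pv["vg_name"].startswith("ceph-")],
    }

if __name__ == "__main__":
    print(json.dumps(lvm_report(), indent=2))

Filtering on the ceph- prefix mirrors the naming convention seen in the report; the deterministic, UUID-derived VG/LV names are presumably what later allows the lvm_volumes entries (data / data_vg) to address each OSD block device unambiguously.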
2025-07-06 20:00:53.203603 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-07-06 20:00:53.204070 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-07-06 20:00:53.204835 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-07-06 20:00:53.205529 | orchestrator |
2025-07-06 20:00:53.206222 | orchestrator |
2025-07-06 20:00:53.206730 | orchestrator |
2025-07-06 20:00:53.207186 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:00:53.207856 | orchestrator | Sunday 06 July 2025 20:00:53 +0000 (0:00:00.130) 0:01:09.830 ***********
2025-07-06 20:00:53.208197 | orchestrator | ===============================================================================
2025-07-06 20:00:53.208680 | orchestrator | Create block VGs -------------------------------------------------------- 5.48s
2025-07-06 20:00:53.209188 | orchestrator | Create block LVs -------------------------------------------------------- 3.97s
2025-07-06 20:00:53.209629 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.84s
2025-07-06 20:00:53.210062 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.57s
2025-07-06 20:00:53.210428 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.54s
2025-07-06 20:00:53.210837 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.51s
2025-07-06 20:00:53.211203 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.51s
2025-07-06 20:00:53.212182 | orchestrator | Add known partitions to the list of available block devices ------------- 1.42s
2025-07-06 20:00:53.212255 | orchestrator | Add known links to the list of available block devices ------------------ 1.18s
2025-07-06 20:00:53.212605 | orchestrator | Add known partitions to the list of available block devices ------------- 1.03s
2025-07-06 20:00:53.213026 | orchestrator | Print LVM report data --------------------------------------------------- 0.88s
2025-07-06 20:00:53.213448 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s
2025-07-06 20:00:53.213891 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.72s
2025-07-06 20:00:53.214188 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.70s
2025-07-06 20:00:53.214601 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2025-07-06 20:00:53.215158 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.67s
2025-07-06 20:00:53.215308 | orchestrator | Get initial list of available block devices ----------------------------- 0.66s
2025-07-06 20:00:53.215646 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.65s
2025-07-06 20:00:53.216033 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-07-06 20:00:53.216398 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.64s
2025-07-06 20:00:55.497746 | orchestrator | Registering Redlock._acquired_script
2025-07-06 20:00:55.497841 | orchestrator | Registering Redlock._extend_script
2025-07-06
20:00:55.497855 | orchestrator | Registering Redlock._release_script 2025-07-06 20:00:55.555650 | orchestrator | 2025-07-06 20:00:55 | INFO  | Task 6b0f26cf-bc12-4347-9bf1-ad35869c8a26 (facts) was prepared for execution. 2025-07-06 20:00:55.555743 | orchestrator | 2025-07-06 20:00:55 | INFO  | It takes a moment until task 6b0f26cf-bc12-4347-9bf1-ad35869c8a26 (facts) has been started and output is visible here. 2025-07-06 20:00:59.553961 | orchestrator | 2025-07-06 20:00:59.554189 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-07-06 20:00:59.555295 | orchestrator | 2025-07-06 20:00:59.555868 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-07-06 20:00:59.557109 | orchestrator | Sunday 06 July 2025 20:00:59 +0000 (0:00:00.258) 0:00:00.258 *********** 2025-07-06 20:01:00.596976 | orchestrator | ok: [testbed-manager] 2025-07-06 20:01:00.597973 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:01:00.601803 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:01:00.603869 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:01:00.604270 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:01:00.605865 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:01:00.606443 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:01:00.607402 | orchestrator | 2025-07-06 20:01:00.608102 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-07-06 20:01:00.608732 | orchestrator | Sunday 06 July 2025 20:01:00 +0000 (0:00:01.040) 0:00:01.298 *********** 2025-07-06 20:01:00.754287 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:01:00.834680 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:01:00.918242 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:01:01.005234 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:01:01.091412 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:01:01.862369 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:01:01.866355 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:01:01.866405 | orchestrator | 2025-07-06 20:01:01.866418 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-06 20:01:01.867851 | orchestrator | 2025-07-06 20:01:01.867868 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-06 20:01:01.868541 | orchestrator | Sunday 06 July 2025 20:01:01 +0000 (0:00:01.270) 0:00:02.568 *********** 2025-07-06 20:01:06.428583 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:01:06.428839 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:01:06.429483 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:01:06.433100 | orchestrator | ok: [testbed-manager] 2025-07-06 20:01:06.433163 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:01:06.433171 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:01:06.433177 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:01:06.433185 | orchestrator | 2025-07-06 20:01:06.433192 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-07-06 20:01:06.433395 | orchestrator | 2025-07-06 20:01:06.433409 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-07-06 20:01:06.433786 | orchestrator | Sunday 06 July 2025 20:01:06 +0000 (0:00:04.569) 0:00:07.138 *********** 2025-07-06 20:01:06.572586 | orchestrator | skipping: [testbed-manager] 
2025-07-06 20:01:06.639685 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:01:06.706276 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:01:06.776161 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:01:06.844547 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:01:06.874271 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:01:06.875815 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:01:06.877296 | orchestrator | 2025-07-06 20:01:06.877984 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:01:06.878638 | orchestrator | 2025-07-06 20:01:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 20:01:06.879315 | orchestrator | 2025-07-06 20:01:06 | INFO  | Please wait and do not abort execution. 2025-07-06 20:01:06.880196 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:01:06.880993 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:01:06.881680 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:01:06.882275 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:01:06.883237 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:01:06.884017 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:01:06.884625 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:01:06.885256 | orchestrator | 2025-07-06 20:01:06.885969 | orchestrator | 2025-07-06 20:01:06.886632 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:01:06.887055 | orchestrator | Sunday 06 July 2025 20:01:06 +0000 (0:00:00.444) 0:00:07.582 *********** 2025-07-06 20:01:06.887586 | orchestrator | =============================================================================== 2025-07-06 20:01:06.888222 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.57s 2025-07-06 20:01:06.888760 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.27s 2025-07-06 20:01:06.889366 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.04s 2025-07-06 20:01:06.889940 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.44s 2025-07-06 20:01:07.406374 | orchestrator | 2025-07-06 20:01:07.409727 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Jul 6 20:01:07 UTC 2025 2025-07-06 20:01:07.409761 | orchestrator | 2025-07-06 20:01:09.050405 | orchestrator | 2025-07-06 20:01:09 | INFO  | Collection nutshell is prepared for execution 2025-07-06 20:01:09.050550 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [0] - dotfiles 2025-07-06 20:01:09.054856 | orchestrator | Registering Redlock._acquired_script 2025-07-06 20:01:09.054941 | orchestrator | Registering Redlock._extend_script 2025-07-06 20:01:09.054957 | orchestrator | Registering Redlock._release_script 2025-07-06 20:01:09.059012 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [0] - homer 2025-07-06 20:01:09.059046 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [0] - 
netdata 2025-07-06 20:01:09.059058 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [0] - openstackclient 2025-07-06 20:01:09.059069 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [0] - phpmyadmin 2025-07-06 20:01:09.059108 | orchestrator | 2025-07-06 20:01:09 | INFO  | A [0] - common 2025-07-06 20:01:09.060728 | orchestrator | 2025-07-06 20:01:09 | INFO  | A [1] -- loadbalancer 2025-07-06 20:01:09.060753 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [2] --- opensearch 2025-07-06 20:01:09.060921 | orchestrator | 2025-07-06 20:01:09 | INFO  | A [2] --- mariadb-ng 2025-07-06 20:01:09.060942 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [3] ---- horizon 2025-07-06 20:01:09.060955 | orchestrator | 2025-07-06 20:01:09 | INFO  | A [3] ---- keystone 2025-07-06 20:01:09.061243 | orchestrator | 2025-07-06 20:01:09 | INFO  | A [4] ----- neutron 2025-07-06 20:01:09.061264 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [5] ------ wait-for-nova 2025-07-06 20:01:09.061356 | orchestrator | 2025-07-06 20:01:09 | INFO  | A [5] ------ octavia 2025-07-06 20:01:09.061751 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [4] ----- barbican 2025-07-06 20:01:09.062069 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [4] ----- designate 2025-07-06 20:01:09.062094 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [4] ----- ironic 2025-07-06 20:01:09.062106 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [4] ----- placement 2025-07-06 20:01:09.062117 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [4] ----- magnum 2025-07-06 20:01:09.062339 | orchestrator | 2025-07-06 20:01:09 | INFO  | A [1] -- openvswitch 2025-07-06 20:01:09.062360 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [2] --- ovn 2025-07-06 20:01:09.062501 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [1] -- memcached 2025-07-06 20:01:09.062756 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [1] -- redis 2025-07-06 20:01:09.062777 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [1] -- rabbitmq-ng 2025-07-06 20:01:09.062920 | orchestrator | 2025-07-06 20:01:09 | INFO  | A [0] - kubernetes 2025-07-06 20:01:09.064783 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [1] -- kubeconfig 2025-07-06 20:01:09.064837 | orchestrator | 2025-07-06 20:01:09 | INFO  | A [1] -- copy-kubeconfig 2025-07-06 20:01:09.064917 | orchestrator | 2025-07-06 20:01:09 | INFO  | A [0] - ceph 2025-07-06 20:01:09.066206 | orchestrator | 2025-07-06 20:01:09 | INFO  | A [1] -- ceph-pools 2025-07-06 20:01:09.066230 | orchestrator | 2025-07-06 20:01:09 | INFO  | A [2] --- copy-ceph-keys 2025-07-06 20:01:09.066416 | orchestrator | 2025-07-06 20:01:09 | INFO  | A [3] ---- cephclient 2025-07-06 20:01:09.066462 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-07-06 20:01:09.066474 | orchestrator | 2025-07-06 20:01:09 | INFO  | A [4] ----- wait-for-keystone 2025-07-06 20:01:09.066486 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [5] ------ kolla-ceph-rgw 2025-07-06 20:01:09.066498 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [5] ------ glance 2025-07-06 20:01:09.066509 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [5] ------ cinder 2025-07-06 20:01:09.066714 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [5] ------ nova 2025-07-06 20:01:09.066736 | orchestrator | 2025-07-06 20:01:09 | INFO  | A [4] ----- prometheus 2025-07-06 20:01:09.066748 | orchestrator | 2025-07-06 20:01:09 | INFO  | D [5] ------ grafana 2025-07-06 20:01:09.244539 | orchestrator | 2025-07-06 20:01:09 | INFO  | All tasks of the collection nutshell are 
prepared for execution 2025-07-06 20:01:09.244636 | orchestrator | 2025-07-06 20:01:09 | INFO  | Tasks are running in the background 2025-07-06 20:01:11.938949 | orchestrator | 2025-07-06 20:01:11 | INFO  | No task IDs specified, wait for all currently running tasks 2025-07-06 20:01:14.042650 | orchestrator | 2025-07-06 20:01:14 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:01:14.048734 | orchestrator | 2025-07-06 20:01:14 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:01:14.048991 | orchestrator | 2025-07-06 20:01:14 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:01:14.049514 | orchestrator | 2025-07-06 20:01:14 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 2025-07-06 20:01:14.052723 | orchestrator | 2025-07-06 20:01:14 | INFO  | Task 91e7d2c3-946b-43cd-ae2d-b183a92764ab is in state STARTED 2025-07-06 20:01:14.052926 | orchestrator | 2025-07-06 20:01:14 | INFO  | Task 876907d7-2583-465f-a279-bbdf3d915047 is in state STARTED 2025-07-06 20:01:14.055564 | orchestrator | 2025-07-06 20:01:14 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:01:14.056555 | orchestrator | 2025-07-06 20:01:14 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:01:17.095605 | orchestrator | 2025-07-06 20:01:17 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:01:17.096067 | orchestrator | 2025-07-06 20:01:17 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:01:17.096789 | orchestrator | 2025-07-06 20:01:17 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:01:17.096834 | orchestrator | 2025-07-06 20:01:17 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 2025-07-06 20:01:17.099992 | orchestrator | 2025-07-06 20:01:17 | INFO  | Task 91e7d2c3-946b-43cd-ae2d-b183a92764ab is in state STARTED 2025-07-06 20:01:17.100593 | orchestrator | 2025-07-06 20:01:17 | INFO  | Task 876907d7-2583-465f-a279-bbdf3d915047 is in state STARTED 2025-07-06 20:01:17.102908 | orchestrator | 2025-07-06 20:01:17 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:01:17.102937 | orchestrator | 2025-07-06 20:01:17 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:01:20.129575 | orchestrator | 2025-07-06 20:01:20 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:01:20.135077 | orchestrator | 2025-07-06 20:01:20 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:01:20.135152 | orchestrator | 2025-07-06 20:01:20 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:01:20.135164 | orchestrator | 2025-07-06 20:01:20 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 2025-07-06 20:01:20.137955 | orchestrator | 2025-07-06 20:01:20 | INFO  | Task 91e7d2c3-946b-43cd-ae2d-b183a92764ab is in state STARTED 2025-07-06 20:01:20.138002 | orchestrator | 2025-07-06 20:01:20 | INFO  | Task 876907d7-2583-465f-a279-bbdf3d915047 is in state STARTED 2025-07-06 20:01:20.138069 | orchestrator | 2025-07-06 20:01:20 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:01:20.138084 | orchestrator | 2025-07-06 20:01:20 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:01:23.182223 | orchestrator | 2025-07-06 20:01:23 | INFO  | Task 
fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:01:23.182335 | orchestrator | 2025-07-06 20:01:23 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:01:23.184280 | orchestrator | 2025-07-06 20:01:23 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:01:23.188059 | orchestrator | 2025-07-06 20:01:23 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 2025-07-06 20:01:23.190000 | orchestrator | 2025-07-06 20:01:23 | INFO  | Task 91e7d2c3-946b-43cd-ae2d-b183a92764ab is in state STARTED 2025-07-06 20:01:23.191621 | orchestrator | 2025-07-06 20:01:23 | INFO  | Task 876907d7-2583-465f-a279-bbdf3d915047 is in state STARTED 2025-07-06 20:01:23.195465 | orchestrator | 2025-07-06 20:01:23 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:01:23.195503 | orchestrator | 2025-07-06 20:01:23 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:01:26.244325 | orchestrator | 2025-07-06 20:01:26 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:01:26.244469 | orchestrator | 2025-07-06 20:01:26 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:01:26.247685 | orchestrator | 2025-07-06 20:01:26 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:01:26.251460 | orchestrator | 2025-07-06 20:01:26 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 2025-07-06 20:01:26.253560 | orchestrator | 2025-07-06 20:01:26 | INFO  | Task 91e7d2c3-946b-43cd-ae2d-b183a92764ab is in state STARTED 2025-07-06 20:01:26.263148 | orchestrator | 2025-07-06 20:01:26 | INFO  | Task 876907d7-2583-465f-a279-bbdf3d915047 is in state STARTED 2025-07-06 20:01:26.263212 | orchestrator | 2025-07-06 20:01:26 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:01:26.263227 | orchestrator | 2025-07-06 20:01:26 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:01:29.325780 | orchestrator | 2025-07-06 20:01:29 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:01:29.325883 | orchestrator | 2025-07-06 20:01:29 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:01:29.325965 | orchestrator | 2025-07-06 20:01:29 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:01:29.328466 | orchestrator | 2025-07-06 20:01:29 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 2025-07-06 20:01:29.330310 | orchestrator | 2025-07-06 20:01:29 | INFO  | Task 91e7d2c3-946b-43cd-ae2d-b183a92764ab is in state STARTED 2025-07-06 20:01:29.334250 | orchestrator | 2025-07-06 20:01:29 | INFO  | Task 876907d7-2583-465f-a279-bbdf3d915047 is in state STARTED 2025-07-06 20:01:29.334290 | orchestrator | 2025-07-06 20:01:29 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:01:29.334304 | orchestrator | 2025-07-06 20:01:29 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:01:32.389108 | orchestrator | 2025-07-06 20:01:32 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:01:32.391514 | orchestrator | 2025-07-06 20:01:32 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:01:32.391555 | orchestrator | 2025-07-06 20:01:32 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:01:32.393247 | 
orchestrator | 2025-07-06 20:01:32 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 2025-07-06 20:01:32.399412 | orchestrator | 2025-07-06 20:01:32 | INFO  | Task 91e7d2c3-946b-43cd-ae2d-b183a92764ab is in state STARTED 2025-07-06 20:01:32.399439 | orchestrator | 2025-07-06 20:01:32 | INFO  | Task 876907d7-2583-465f-a279-bbdf3d915047 is in state STARTED 2025-07-06 20:01:32.399452 | orchestrator | 2025-07-06 20:01:32 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:01:32.399465 | orchestrator | 2025-07-06 20:01:32 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:01:35.442554 | orchestrator | 2025-07-06 20:01:35 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:01:35.444842 | orchestrator | 2025-07-06 20:01:35 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:01:35.444882 | orchestrator | 2025-07-06 20:01:35 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:01:35.444941 | orchestrator | 2025-07-06 20:01:35 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 2025-07-06 20:01:35.447273 | orchestrator | 2025-07-06 20:01:35 | INFO  | Task 91e7d2c3-946b-43cd-ae2d-b183a92764ab is in state STARTED 2025-07-06 20:01:35.449863 | orchestrator | 2025-07-06 20:01:35.449912 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-07-06 20:01:35.449925 | orchestrator | 2025-07-06 20:01:35.449937 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-07-06 20:01:35.449949 | orchestrator | Sunday 06 July 2025 20:01:20 +0000 (0:00:00.421) 0:00:00.421 *********** 2025-07-06 20:01:35.449960 | orchestrator | changed: [testbed-manager] 2025-07-06 20:01:35.449972 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:01:35.449983 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:01:35.449994 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:01:35.450005 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:01:35.450060 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:01:35.450074 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:01:35.450086 | orchestrator | 2025-07-06 20:01:35.450097 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-07-06 20:01:35.450109 | orchestrator | Sunday 06 July 2025 20:01:24 +0000 (0:00:04.186) 0:00:04.608 *********** 2025-07-06 20:01:35.450120 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-07-06 20:01:35.450132 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-07-06 20:01:35.450143 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-07-06 20:01:35.450154 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-07-06 20:01:35.450165 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-07-06 20:01:35.450176 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-07-06 20:01:35.450187 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-07-06 20:01:35.450198 | orchestrator | 2025-07-06 20:01:35.450209 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-07-06 20:01:35.450221 | orchestrator | Sunday 06 July 2025 20:01:26 +0000 (0:00:02.271) 0:00:06.879 *********** 2025-07-06 20:01:35.450236 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-06 20:01:25.673657', 'end': '2025-07-06 20:01:25.677445', 'delta': '0:00:00.003788', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-06 20:01:35.450260 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-06 20:01:25.702103', 'end': '2025-07-06 20:01:25.710732', 'delta': '0:00:00.008629', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-06 20:01:35.450300 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-06 20:01:25.676818', 'end': '2025-07-06 20:01:25.683358', 'delta': '0:00:00.006540', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-06 20:01:35.450347 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-06 20:01:25.798657', 'end': '2025-07-06 20:01:25.810343', 'delta': '0:00:00.011686', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 
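For orientation: the "Remove existing dotfiles file if a replacement is being linked" results above (the remaining testbed nodes report the same thing just below) come from a probe with ls -F in which rc=2 only means the target file does not exist yet; a pre-existing regular file would be removed so that the later "Link dotfiles into home folder" task can put a symlink in its place. The following Python fragment is a purely illustrative sketch of that probe-then-link flow (the paths and the link_dotfile helper are invented here; the role itself does this with Ansible tasks):

# Illustrative sketch only (assumed logic, not the geerlingguy.dotfiles code):
# probe the target like the "ls -F ~/.tmux.conf" call in the log, remove a
# plain file that would shadow the dotfile, then symlink the repository copy.
import os
import subprocess

def link_dotfile(repo_copy: str, target: str) -> None:
    probe = subprocess.run(["ls", "-F", target], capture_output=True, text=True)
    target_exists = probe.returncode == 0  # rc=2 above simply means "absent"

    # Remove a pre-existing regular file so the symlink can take its place.
    if target_exists and not os.path.islink(target):
        os.remove(target)

    # Drop or refresh the symlink pointing at the copy in the dotfiles repo.
    if os.path.islink(target) and os.readlink(target) != repo_copy:
        os.remove(target)
    if not os.path.lexists(target):
        os.symlink(repo_copy, target)

link_dotfile(os.path.expanduser("~/dotfiles/.tmux.conf"),
             os.path.expanduser("~/.tmux.conf"))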
2025-07-06 20:01:35.450459 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-06 20:01:26.233288', 'end': '2025-07-06 20:01:26.241151', 'delta': '0:00:00.007863', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-06 20:01:35.450484 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-06 20:01:26.557475', 'end': '2025-07-06 20:01:26.563634', 'delta': '0:00:00.006159', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-06 20:01:35.450515 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-06 20:01:26.733810', 'end': '2025-07-06 20:01:26.742972', 'delta': '0:00:00.009162', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-06 20:01:35.450558 | orchestrator | 2025-07-06 20:01:35.450580 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2025-07-06 20:01:35.450600 | orchestrator | Sunday 06 July 2025 20:01:29 +0000 (0:00:03.009) 0:00:09.889 *********** 2025-07-06 20:01:35.450621 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-07-06 20:01:35.450640 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-07-06 20:01:35.450660 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-07-06 20:01:35.450679 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-07-06 20:01:35.450755 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-07-06 20:01:35.450775 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-07-06 20:01:35.450796 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-07-06 20:01:35.450816 | orchestrator | 2025-07-06 20:01:35.450831 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-07-06 20:01:35.450843 | orchestrator | Sunday 06 July 2025 20:01:31 +0000 (0:00:01.787) 0:00:11.676 *********** 2025-07-06 20:01:35.450854 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-07-06 20:01:35.450865 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-07-06 20:01:35.450876 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-07-06 20:01:35.450887 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-07-06 20:01:35.450898 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-07-06 20:01:35.450909 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-07-06 20:01:35.450919 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-07-06 20:01:35.450930 | orchestrator | 2025-07-06 20:01:35.450941 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:01:35.450965 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:01:35.450979 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:01:35.450990 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:01:35.451001 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:01:35.451012 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:01:35.451023 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:01:35.451034 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:01:35.451045 | orchestrator | 2025-07-06 20:01:35.451059 | orchestrator | 2025-07-06 20:01:35.451078 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:01:35.451096 | orchestrator | Sunday 06 July 2025 20:01:34 +0000 (0:00:02.858) 0:00:14.535 *********** 2025-07-06 20:01:35.451114 | orchestrator | =============================================================================== 2025-07-06 20:01:35.451132 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.19s 2025-07-06 20:01:35.451161 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.01s 2025-07-06 20:01:35.451177 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. 
------------------ 2.86s 2025-07-06 20:01:35.451196 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.27s 2025-07-06 20:01:35.451214 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.79s 2025-07-06 20:01:35.451279 | orchestrator | 2025-07-06 20:01:35 | INFO  | Task 876907d7-2583-465f-a279-bbdf3d915047 is in state SUCCESS 2025-07-06 20:01:35.451474 | orchestrator | 2025-07-06 20:01:35 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:01:35.451501 | orchestrator | 2025-07-06 20:01:35 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:01:38.505148 | orchestrator | 2025-07-06 20:01:38 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:01:38.505310 | orchestrator | 2025-07-06 20:01:38 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:01:38.507264 | orchestrator | 2025-07-06 20:01:38 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:01:38.514589 | orchestrator | 2025-07-06 20:01:38 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 2025-07-06 20:01:38.515024 | orchestrator | 2025-07-06 20:01:38 | INFO  | Task baa4be65-5fc7-4e22-aa19-98cf42b0ae0d is in state STARTED 2025-07-06 20:01:38.516136 | orchestrator | 2025-07-06 20:01:38 | INFO  | Task 91e7d2c3-946b-43cd-ae2d-b183a92764ab is in state STARTED 2025-07-06 20:01:38.519997 | orchestrator | 2025-07-06 20:01:38 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:01:38.520083 | orchestrator | 2025-07-06 20:01:38 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:01:41.556926 | orchestrator | 2025-07-06 20:01:41 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:01:41.563796 | orchestrator | 2025-07-06 20:01:41 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:01:41.563854 | orchestrator | 2025-07-06 20:01:41 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:01:41.566584 | orchestrator | 2025-07-06 20:01:41 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 2025-07-06 20:01:41.578566 | orchestrator | 2025-07-06 20:01:41 | INFO  | Task baa4be65-5fc7-4e22-aa19-98cf42b0ae0d is in state STARTED 2025-07-06 20:01:41.582365 | orchestrator | 2025-07-06 20:01:41 | INFO  | Task 91e7d2c3-946b-43cd-ae2d-b183a92764ab is in state STARTED 2025-07-06 20:01:41.586418 | orchestrator | 2025-07-06 20:01:41 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:01:41.586513 | orchestrator | 2025-07-06 20:01:41 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:01:44.650559 | orchestrator | 2025-07-06 20:01:44 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:01:44.651413 | orchestrator | 2025-07-06 20:01:44 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:01:44.653393 | orchestrator | 2025-07-06 20:01:44 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:01:44.653650 | orchestrator | 2025-07-06 20:01:44 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 2025-07-06 20:01:44.655315 | orchestrator | 2025-07-06 20:01:44 | INFO  | Task baa4be65-5fc7-4e22-aa19-98cf42b0ae0d is in state STARTED 2025-07-06 20:01:44.656228 | orchestrator | 2025-07-06 20:01:44 | INFO  | Task 
91e7d2c3-946b-43cd-ae2d-b183a92764ab is in state STARTED 2025-07-06 20:01:44.660670 | orchestrator | 2025-07-06 20:01:44 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:01:44.660709 | orchestrator | 2025-07-06 20:01:44 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:01:47.699916 | orchestrator | 2025-07-06 20:01:47 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:01:47.704040 | orchestrator | 2025-07-06 20:01:47 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:01:47.704964 | orchestrator | 2025-07-06 20:01:47 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:01:47.707271 | orchestrator | 2025-07-06 20:01:47 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 2025-07-06 20:01:47.708137 | orchestrator | 2025-07-06 20:01:47 | INFO  | Task baa4be65-5fc7-4e22-aa19-98cf42b0ae0d is in state STARTED 2025-07-06 20:01:47.711423 | orchestrator | 2025-07-06 20:01:47 | INFO  | Task 91e7d2c3-946b-43cd-ae2d-b183a92764ab is in state STARTED 2025-07-06 20:01:47.712866 | orchestrator | 2025-07-06 20:01:47 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:01:47.712894 | orchestrator | 2025-07-06 20:01:47 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:01:50.759738 | orchestrator | 2025-07-06 20:01:50 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:01:50.759826 | orchestrator | 2025-07-06 20:01:50 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:01:50.759838 | orchestrator | 2025-07-06 20:01:50 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:01:50.759848 | orchestrator | 2025-07-06 20:01:50 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 2025-07-06 20:01:50.759921 | orchestrator | 2025-07-06 20:01:50 | INFO  | Task baa4be65-5fc7-4e22-aa19-98cf42b0ae0d is in state STARTED 2025-07-06 20:01:50.762796 | orchestrator | 2025-07-06 20:01:50 | INFO  | Task 91e7d2c3-946b-43cd-ae2d-b183a92764ab is in state STARTED 2025-07-06 20:01:50.763165 | orchestrator | 2025-07-06 20:01:50 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:01:50.763255 | orchestrator | 2025-07-06 20:01:50 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:01:53.793164 | orchestrator | 2025-07-06 20:01:53 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:01:53.793418 | orchestrator | 2025-07-06 20:01:53 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:01:53.797168 | orchestrator | 2025-07-06 20:01:53 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:01:53.799705 | orchestrator | 2025-07-06 20:01:53 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 2025-07-06 20:01:53.802161 | orchestrator | 2025-07-06 20:01:53 | INFO  | Task baa4be65-5fc7-4e22-aa19-98cf42b0ae0d is in state STARTED 2025-07-06 20:01:53.804140 | orchestrator | 2025-07-06 20:01:53 | INFO  | Task 91e7d2c3-946b-43cd-ae2d-b183a92764ab is in state STARTED 2025-07-06 20:01:53.804914 | orchestrator | 2025-07-06 20:01:53 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:01:53.805994 | orchestrator | 2025-07-06 20:01:53 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:01:56.889638 | orchestrator | 2025-07-06 
20:01:56 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:01:56.893019 | orchestrator | 2025-07-06 20:01:56 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:01:56.895116 | orchestrator | 2025-07-06 20:01:56 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:01:56.901093 | orchestrator | 2025-07-06 20:01:56 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 2025-07-06 20:01:56.905752 | orchestrator | 2025-07-06 20:01:56 | INFO  | Task baa4be65-5fc7-4e22-aa19-98cf42b0ae0d is in state STARTED 2025-07-06 20:01:56.907546 | orchestrator | 2025-07-06 20:01:56 | INFO  | Task 91e7d2c3-946b-43cd-ae2d-b183a92764ab is in state STARTED 2025-07-06 20:01:56.907587 | orchestrator | 2025-07-06 20:01:56 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:01:56.907601 | orchestrator | 2025-07-06 20:01:56 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:01:59.943272 | orchestrator | 2025-07-06 20:01:59 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:01:59.943440 | orchestrator | 2025-07-06 20:01:59 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:01:59.943806 | orchestrator | 2025-07-06 20:01:59 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:01:59.944506 | orchestrator | 2025-07-06 20:01:59 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 2025-07-06 20:01:59.945681 | orchestrator | 2025-07-06 20:01:59 | INFO  | Task baa4be65-5fc7-4e22-aa19-98cf42b0ae0d is in state STARTED 2025-07-06 20:01:59.946945 | orchestrator | 2025-07-06 20:01:59 | INFO  | Task 91e7d2c3-946b-43cd-ae2d-b183a92764ab is in state SUCCESS 2025-07-06 20:01:59.950170 | orchestrator | 2025-07-06 20:01:59 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:01:59.950198 | orchestrator | 2025-07-06 20:01:59 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:02:02.994251 | orchestrator | 2025-07-06 20:02:02 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:02.994402 | orchestrator | 2025-07-06 20:02:02 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:02.997906 | orchestrator | 2025-07-06 20:02:02 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:02.997993 | orchestrator | 2025-07-06 20:02:02 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 2025-07-06 20:02:02.998013 | orchestrator | 2025-07-06 20:02:02 | INFO  | Task baa4be65-5fc7-4e22-aa19-98cf42b0ae0d is in state SUCCESS 2025-07-06 20:02:02.998094 | orchestrator | 2025-07-06 20:02:02 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:02:02.998111 | orchestrator | 2025-07-06 20:02:02 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:02:06.053546 | orchestrator | 2025-07-06 20:02:06 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:06.053657 | orchestrator | 2025-07-06 20:02:06 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:06.055070 | orchestrator | 2025-07-06 20:02:06 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:06.055093 | orchestrator | 2025-07-06 20:02:06 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state STARTED 
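The long runs of "Task <uuid> is in state STARTED" followed by "Wait 1 second(s) until the next check" are the orchestrator polling the background deployment tasks it queued earlier; polling continues below until each task reports SUCCESS. A minimal sketch of such a wait loop, with get_state() as a stand-in for the real task-state lookup (this is not the actual osism client API):

# Sketch of the polling loop that produces the "is in state STARTED" lines.
# get_state() is a placeholder for the real task-state lookup.
import time

def wait_for_tasks(task_ids, get_state, interval=1):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)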
2025-07-06 20:02:06.055105 | orchestrator | 2025-07-06 20:02:06 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED
2025-07-06 20:02:06.055117 | orchestrator | 2025-07-06 20:02:06 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:02:09.090231 | orchestrator | 2025-07-06 20:02:09 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED
2025-07-06 20:02:09.091198 | orchestrator | 2025-07-06 20:02:09 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED
2025-07-06 20:02:09.092680 | orchestrator | 2025-07-06 20:02:09 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED
2025-07-06 20:02:09.094126 | orchestrator |
2025-07-06 20:02:09.094144 | orchestrator |
2025-07-06 20:02:09.094154 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-07-06 20:02:09.094160 | orchestrator |
2025-07-06 20:02:09.094166 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-07-06 20:02:09.094172 | orchestrator | Sunday 06 July 2025 20:01:22 +0000 (0:00:01.055) 0:00:01.055 ***********
2025-07-06 20:02:09.094177 | orchestrator | ok: [testbed-manager] => {
2025-07-06 20:02:09.094185 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-07-06 20:02:09.094192 | orchestrator | }
2025-07-06 20:02:09.094198 | orchestrator |
2025-07-06 20:02:09.094203 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-07-06 20:02:09.094208 | orchestrator | Sunday 06 July 2025 20:01:23 +0000 (0:00:00.472) 0:00:01.527 ***********
2025-07-06 20:02:09.094214 | orchestrator | ok: [testbed-manager]
2025-07-06 20:02:09.094220 | orchestrator |
2025-07-06 20:02:09.094225 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-07-06 20:02:09.094230 | orchestrator | Sunday 06 July 2025 20:01:24 +0000 (0:00:01.089) 0:00:02.617 ***********
2025-07-06 20:02:09.094235 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-07-06 20:02:09.094240 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-07-06 20:02:09.094246 | orchestrator |
2025-07-06 20:02:09.094251 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-07-06 20:02:09.094256 | orchestrator | Sunday 06 July 2025 20:01:25 +0000 (0:00:01.451) 0:00:04.068 ***********
2025-07-06 20:02:09.094261 | orchestrator | changed: [testbed-manager]
2025-07-06 20:02:09.094266 | orchestrator |
2025-07-06 20:02:09.094285 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-07-06 20:02:09.094291 | orchestrator | Sunday 06 July 2025 20:01:28 +0000 (0:00:02.742) 0:00:06.811 ***********
2025-07-06 20:02:09.094296 | orchestrator | changed: [testbed-manager]
2025-07-06 20:02:09.094301 | orchestrator |
2025-07-06 20:02:09.094306 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-07-06 20:02:09.094311 | orchestrator | Sunday 06 July 2025 20:01:30 +0000 (0:00:01.709) 0:00:08.520 ***********
2025-07-06 20:02:09.094316 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
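The first attempt of "Manage homer service" fails and is retried (hence the "(10 retries left)" counter); the task only reports ok on the next line once the service is up. A rough sketch of that retry-until-active pattern, with service_is_active() standing in for whatever check the role performs and the 5 second delay chosen arbitrarily for illustration:

# Sketch of the retry behaviour behind "FAILED - RETRYING ... (10 retries left)".
# service_is_active() and the delay value are assumptions, not the role's code.
import time

def wait_until_active(service_is_active, retries=10, delay=5):
    for attempt in range(retries + 1):
        if service_is_active():
            return True
        if attempt < retries:
            print(f"FAILED - RETRYING: Manage homer service ({retries - attempt} retries left).")
            time.sleep(delay)
    return False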
2025-07-06 20:02:09.094322 | orchestrator | ok: [testbed-manager] 2025-07-06 20:02:09.094327 | orchestrator | 2025-07-06 20:02:09.094332 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-07-06 20:02:09.094337 | orchestrator | Sunday 06 July 2025 20:01:54 +0000 (0:00:24.090) 0:00:32.610 *********** 2025-07-06 20:02:09.094342 | orchestrator | changed: [testbed-manager] 2025-07-06 20:02:09.094347 | orchestrator | 2025-07-06 20:02:09.094353 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:02:09.094358 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:02:09.094365 | orchestrator | 2025-07-06 20:02:09.094370 | orchestrator | 2025-07-06 20:02:09.094375 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:02:09.094380 | orchestrator | Sunday 06 July 2025 20:01:56 +0000 (0:00:01.910) 0:00:34.521 *********** 2025-07-06 20:02:09.094386 | orchestrator | =============================================================================== 2025-07-06 20:02:09.094391 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.09s 2025-07-06 20:02:09.094396 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.74s 2025-07-06 20:02:09.094401 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.91s 2025-07-06 20:02:09.094416 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.71s 2025-07-06 20:02:09.094432 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.45s 2025-07-06 20:02:09.094438 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.09s 2025-07-06 20:02:09.094443 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.47s 2025-07-06 20:02:09.094448 | orchestrator | 2025-07-06 20:02:09.094453 | orchestrator | 2025-07-06 20:02:09.094458 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-07-06 20:02:09.094463 | orchestrator | 2025-07-06 20:02:09.094469 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-07-06 20:02:09.094474 | orchestrator | Sunday 06 July 2025 20:01:40 +0000 (0:00:00.311) 0:00:00.311 *********** 2025-07-06 20:02:09.094479 | orchestrator | ok: [testbed-manager] 2025-07-06 20:02:09.094484 | orchestrator | 2025-07-06 20:02:09.094489 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-07-06 20:02:09.094494 | orchestrator | Sunday 06 July 2025 20:01:41 +0000 (0:00:00.830) 0:00:01.142 *********** 2025-07-06 20:02:09.094499 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-07-06 20:02:09.094505 | orchestrator | 2025-07-06 20:02:09.094510 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-07-06 20:02:09.094515 | orchestrator | Sunday 06 July 2025 20:01:41 +0000 (0:00:00.682) 0:00:01.825 *********** 2025-07-06 20:02:09.094520 | orchestrator | changed: [testbed-manager] 2025-07-06 20:02:09.094525 | orchestrator | 2025-07-06 20:02:09.094530 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-07-06 
20:02:09.094536 | orchestrator | Sunday 06 July 2025 20:01:42 +0000 (0:00:01.137) 0:00:02.962 *********** 2025-07-06 20:02:09.094548 | orchestrator | fatal: [testbed-manager]: FAILED! => {"msg": "The conditional check 'result[\"status\"][\"ActiveState\"] == \"active\"' failed. The error was: error while evaluating conditional (result[\"status\"][\"ActiveState\"] == \"active\"): 'dict object' has no attribute 'status'"} 2025-07-06 20:02:09.094556 | orchestrator | 2025-07-06 20:02:09.094570 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:02:09.094576 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-07-06 20:02:09.094582 | orchestrator | 2025-07-06 20:02:09.094587 | orchestrator | 2025-07-06 20:02:09.094592 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:02:09.094597 | orchestrator | Sunday 06 July 2025 20:02:00 +0000 (0:00:17.138) 0:00:20.101 *********** 2025-07-06 20:02:09.094602 | orchestrator | =============================================================================== 2025-07-06 20:02:09.094608 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 17.14s 2025-07-06 20:02:09.094613 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.14s 2025-07-06 20:02:09.094618 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.83s 2025-07-06 20:02:09.094623 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.68s 2025-07-06 20:02:09.094629 | orchestrator | 2025-07-06 20:02:09.094634 | orchestrator | 2025-07-06 20:02:09.094639 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-07-06 20:02:09.094644 | orchestrator | 2025-07-06 20:02:09.094652 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-07-06 20:02:09.094660 | orchestrator | Sunday 06 July 2025 20:01:20 +0000 (0:00:00.469) 0:00:00.469 *********** 2025-07-06 20:02:09.094669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-07-06 20:02:09.094680 | orchestrator | 2025-07-06 20:02:09.094689 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-07-06 20:02:09.094704 | orchestrator | Sunday 06 July 2025 20:01:21 +0000 (0:00:00.620) 0:00:01.090 *********** 2025-07-06 20:02:09.094713 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-07-06 20:02:09.094720 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-07-06 20:02:09.094726 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-07-06 20:02:09.094735 | orchestrator | 2025-07-06 20:02:09.094743 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-07-06 20:02:09.094751 | orchestrator | Sunday 06 July 2025 20:01:22 +0000 (0:00:01.838) 0:00:02.928 *********** 2025-07-06 20:02:09.094759 | orchestrator | changed: [testbed-manager] 2025-07-06 20:02:09.094768 | orchestrator | 2025-07-06 20:02:09.094777 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-07-06 20:02:09.094786 | orchestrator | 
Sunday 06 July 2025 20:01:24 +0000 (0:00:01.245) 0:00:04.173 *********** 2025-07-06 20:02:09.094795 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-07-06 20:02:09.094805 | orchestrator | ok: [testbed-manager] 2025-07-06 20:02:09.094814 | orchestrator | 2025-07-06 20:02:09.094823 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-07-06 20:02:09.094832 | orchestrator | Sunday 06 July 2025 20:02:00 +0000 (0:00:36.296) 0:00:40.470 *********** 2025-07-06 20:02:09.094841 | orchestrator | changed: [testbed-manager] 2025-07-06 20:02:09.094851 | orchestrator | 2025-07-06 20:02:09.094857 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-07-06 20:02:09.094864 | orchestrator | Sunday 06 July 2025 20:02:01 +0000 (0:00:01.048) 0:00:41.518 *********** 2025-07-06 20:02:09.094871 | orchestrator | ok: [testbed-manager] 2025-07-06 20:02:09.094877 | orchestrator | 2025-07-06 20:02:09.094883 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-07-06 20:02:09.094890 | orchestrator | Sunday 06 July 2025 20:02:02 +0000 (0:00:00.822) 0:00:42.341 *********** 2025-07-06 20:02:09.094900 | orchestrator | changed: [testbed-manager] 2025-07-06 20:02:09.094906 | orchestrator | 2025-07-06 20:02:09.094913 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-07-06 20:02:09.094919 | orchestrator | Sunday 06 July 2025 20:02:04 +0000 (0:00:01.875) 0:00:44.216 *********** 2025-07-06 20:02:09.094925 | orchestrator | changed: [testbed-manager] 2025-07-06 20:02:09.094932 | orchestrator | 2025-07-06 20:02:09.094938 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-07-06 20:02:09.094945 | orchestrator | Sunday 06 July 2025 20:02:05 +0000 (0:00:01.067) 0:00:45.283 *********** 2025-07-06 20:02:09.094951 | orchestrator | changed: [testbed-manager] 2025-07-06 20:02:09.094957 | orchestrator | 2025-07-06 20:02:09.094963 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-07-06 20:02:09.094970 | orchestrator | Sunday 06 July 2025 20:02:05 +0000 (0:00:00.614) 0:00:45.898 *********** 2025-07-06 20:02:09.094976 | orchestrator | ok: [testbed-manager] 2025-07-06 20:02:09.094982 | orchestrator | 2025-07-06 20:02:09.094988 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:02:09.094994 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:02:09.095001 | orchestrator | 2025-07-06 20:02:09.095007 | orchestrator | 2025-07-06 20:02:09.095013 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:02:09.095019 | orchestrator | Sunday 06 July 2025 20:02:06 +0000 (0:00:00.278) 0:00:46.177 *********** 2025-07-06 20:02:09.095026 | orchestrator | =============================================================================== 2025-07-06 20:02:09.095032 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.30s 2025-07-06 20:02:09.095038 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.88s 2025-07-06 20:02:09.095044 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.82s 
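The phpmyadmin failure a little further up deserves a note: the conditional result["status"]["ActiveState"] == "active" was evaluated against a registered result that contained no status entry at all, so the lookup itself raised ('dict object' has no attribute 'status') instead of simply evaluating to false; a guard such as checking that result.status is defined, or a default filter, would sidestep that. The openstackclient recap continues below. In plain Python the same pitfall and a tolerant variant look like this (the example result dict is invented for illustration):

# The registered result lacks "status", as in the phpmyadmin failure above.
# The dict shown here is invented for illustration only.
result = {"failed": True, "msg": "service state could not be determined"}

# Direct indexing mirrors result["status"]["ActiveState"] and raises KeyError:
try:
    active = result["status"]["ActiveState"] == "active"
except KeyError:
    active = False

# A tolerant lookup evaluates to False instead of blowing up:
active = result.get("status", {}).get("ActiveState") == "active"
print(active)  # False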
2025-07-06 20:02:09.095056 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.25s 2025-07-06 20:02:09.095065 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.07s 2025-07-06 20:02:09.095071 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.05s 2025-07-06 20:02:09.095076 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.82s 2025-07-06 20:02:09.095081 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.63s 2025-07-06 20:02:09.095086 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.61s 2025-07-06 20:02:09.095092 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.28s 2025-07-06 20:02:09.095111 | orchestrator | 2025-07-06 20:02:09 | INFO  | Task c764b957-1544-4d3a-854d-3db9ba4835f8 is in state SUCCESS 2025-07-06 20:02:09.095679 | orchestrator | 2025-07-06 20:02:09 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:02:09.096208 | orchestrator | 2025-07-06 20:02:09 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:02:12.137709 | orchestrator | 2025-07-06 20:02:12 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:12.141211 | orchestrator | 2025-07-06 20:02:12 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:12.141788 | orchestrator | 2025-07-06 20:02:12 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:12.143098 | orchestrator | 2025-07-06 20:02:12 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:02:12.143157 | orchestrator | 2025-07-06 20:02:12 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:02:15.206952 | orchestrator | 2025-07-06 20:02:15 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:15.213063 | orchestrator | 2025-07-06 20:02:15 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:15.223077 | orchestrator | 2025-07-06 20:02:15 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:15.223140 | orchestrator | 2025-07-06 20:02:15 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:02:15.223167 | orchestrator | 2025-07-06 20:02:15 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:02:18.270593 | orchestrator | 2025-07-06 20:02:18 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:18.274282 | orchestrator | 2025-07-06 20:02:18 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:18.274342 | orchestrator | 2025-07-06 20:02:18 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:18.275854 | orchestrator | 2025-07-06 20:02:18 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state STARTED 2025-07-06 20:02:18.275911 | orchestrator | 2025-07-06 20:02:18 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:02:21.323020 | orchestrator | 2025-07-06 20:02:21 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:21.324910 | orchestrator | 2025-07-06 20:02:21 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:21.326270 | orchestrator | 2025-07-06 
20:02:21 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:21.327887 | orchestrator | 2025-07-06 20:02:21 | INFO  | Task 0153faec-0a29-403a-91ba-3a9d22521351 is in state SUCCESS 2025-07-06 20:02:21.328722 | orchestrator | 2025-07-06 20:02:21.328755 | orchestrator | 2025-07-06 20:02:21.328762 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:02:21.328785 | orchestrator | 2025-07-06 20:02:21.328792 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:02:21.328799 | orchestrator | Sunday 06 July 2025 20:01:20 +0000 (0:00:00.665) 0:00:00.665 *********** 2025-07-06 20:02:21.328806 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-07-06 20:02:21.328813 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-07-06 20:02:21.328819 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-07-06 20:02:21.328826 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-07-06 20:02:21.328832 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-07-06 20:02:21.328838 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-07-06 20:02:21.328845 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-07-06 20:02:21.328851 | orchestrator | 2025-07-06 20:02:21.328857 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-07-06 20:02:21.328863 | orchestrator | 2025-07-06 20:02:21.328869 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-07-06 20:02:21.328875 | orchestrator | Sunday 06 July 2025 20:01:22 +0000 (0:00:02.196) 0:00:02.861 *********** 2025-07-06 20:02:21.328892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:02:21.328901 | orchestrator | 2025-07-06 20:02:21.328907 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-07-06 20:02:21.328913 | orchestrator | Sunday 06 July 2025 20:01:24 +0000 (0:00:01.869) 0:00:04.730 *********** 2025-07-06 20:02:21.328920 | orchestrator | ok: [testbed-manager] 2025-07-06 20:02:21.328927 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:02:21.328933 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:02:21.328939 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:02:21.328945 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:02:21.328951 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:02:21.328958 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:02:21.328964 | orchestrator | 2025-07-06 20:02:21.328970 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-07-06 20:02:21.328976 | orchestrator | Sunday 06 July 2025 20:01:27 +0000 (0:00:02.497) 0:00:07.228 *********** 2025-07-06 20:02:21.328982 | orchestrator | ok: [testbed-manager] 2025-07-06 20:02:21.328988 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:02:21.328995 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:02:21.329001 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:02:21.329007 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:02:21.329013 | orchestrator | 
ok: [testbed-node-4] 2025-07-06 20:02:21.329019 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:02:21.329025 | orchestrator | 2025-07-06 20:02:21.329032 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-07-06 20:02:21.329038 | orchestrator | Sunday 06 July 2025 20:01:31 +0000 (0:00:03.964) 0:00:11.192 *********** 2025-07-06 20:02:21.329045 | orchestrator | changed: [testbed-manager] 2025-07-06 20:02:21.329051 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:02:21.329057 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:02:21.329063 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:02:21.329069 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:02:21.329076 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:02:21.329082 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:02:21.329088 | orchestrator | 2025-07-06 20:02:21.329097 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-07-06 20:02:21.329107 | orchestrator | Sunday 06 July 2025 20:01:34 +0000 (0:00:03.043) 0:00:14.236 *********** 2025-07-06 20:02:21.329117 | orchestrator | changed: [testbed-manager] 2025-07-06 20:02:21.329127 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:02:21.329143 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:02:21.329153 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:02:21.329164 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:02:21.329175 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:02:21.329185 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:02:21.329195 | orchestrator | 2025-07-06 20:02:21.329206 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-07-06 20:02:21.329216 | orchestrator | Sunday 06 July 2025 20:01:43 +0000 (0:00:09.078) 0:00:23.315 *********** 2025-07-06 20:02:21.329226 | orchestrator | changed: [testbed-manager] 2025-07-06 20:02:21.329236 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:02:21.329275 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:02:21.329283 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:02:21.329293 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:02:21.329303 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:02:21.329313 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:02:21.329324 | orchestrator | 2025-07-06 20:02:21.329336 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-07-06 20:02:21.329347 | orchestrator | Sunday 06 July 2025 20:01:59 +0000 (0:00:16.591) 0:00:39.907 *********** 2025-07-06 20:02:21.329359 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:02:21.329368 | orchestrator | 2025-07-06 20:02:21.329377 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-07-06 20:02:21.329389 | orchestrator | Sunday 06 July 2025 20:02:01 +0000 (0:00:01.853) 0:00:41.760 *********** 2025-07-06 20:02:21.329406 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-07-06 20:02:21.329418 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-07-06 20:02:21.329459 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-07-06 
20:02:21.329473 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-07-06 20:02:21.329497 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-07-06 20:02:21.329508 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-07-06 20:02:21.329515 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-07-06 20:02:21.329522 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-07-06 20:02:21.329528 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-07-06 20:02:21.329535 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-07-06 20:02:21.329541 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-07-06 20:02:21.329547 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-07-06 20:02:21.329553 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-07-06 20:02:21.329560 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-07-06 20:02:21.329566 | orchestrator | 2025-07-06 20:02:21.329572 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-07-06 20:02:21.329580 | orchestrator | Sunday 06 July 2025 20:02:06 +0000 (0:00:04.742) 0:00:46.502 *********** 2025-07-06 20:02:21.329586 | orchestrator | ok: [testbed-manager] 2025-07-06 20:02:21.329593 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:02:21.329599 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:02:21.329605 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:02:21.329612 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:02:21.329618 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:02:21.329624 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:02:21.329630 | orchestrator | 2025-07-06 20:02:21.329636 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-07-06 20:02:21.329643 | orchestrator | Sunday 06 July 2025 20:02:07 +0000 (0:00:01.207) 0:00:47.709 *********** 2025-07-06 20:02:21.329649 | orchestrator | changed: [testbed-manager] 2025-07-06 20:02:21.329655 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:02:21.329668 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:02:21.329674 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:02:21.329680 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:02:21.329687 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:02:21.329693 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:02:21.329699 | orchestrator | 2025-07-06 20:02:21.329705 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-07-06 20:02:21.329711 | orchestrator | Sunday 06 July 2025 20:02:09 +0000 (0:00:01.279) 0:00:48.989 *********** 2025-07-06 20:02:21.329717 | orchestrator | ok: [testbed-manager] 2025-07-06 20:02:21.329724 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:02:21.329730 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:02:21.329736 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:02:21.329742 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:02:21.329748 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:02:21.329754 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:02:21.329761 | orchestrator | 2025-07-06 20:02:21.329767 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-07-06 20:02:21.329773 | orchestrator | Sunday 06 July 2025 20:02:10 +0000 (0:00:01.271) 
0:00:50.261 *********** 2025-07-06 20:02:21.329779 | orchestrator | ok: [testbed-manager] 2025-07-06 20:02:21.329786 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:02:21.329792 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:02:21.329798 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:02:21.329804 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:02:21.329810 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:02:21.329816 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:02:21.329823 | orchestrator | 2025-07-06 20:02:21.329829 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-07-06 20:02:21.329835 | orchestrator | Sunday 06 July 2025 20:02:11 +0000 (0:00:01.577) 0:00:51.838 *********** 2025-07-06 20:02:21.329842 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-07-06 20:02:21.329850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:02:21.329857 | orchestrator | 2025-07-06 20:02:21.329863 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-07-06 20:02:21.329869 | orchestrator | Sunday 06 July 2025 20:02:13 +0000 (0:00:01.280) 0:00:53.119 *********** 2025-07-06 20:02:21.329875 | orchestrator | changed: [testbed-manager] 2025-07-06 20:02:21.329882 | orchestrator | 2025-07-06 20:02:21.329888 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-07-06 20:02:21.329894 | orchestrator | Sunday 06 July 2025 20:02:15 +0000 (0:00:01.909) 0:00:55.028 *********** 2025-07-06 20:02:21.329900 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:02:21.329906 | orchestrator | changed: [testbed-manager] 2025-07-06 20:02:21.329913 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:02:21.329919 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:02:21.329925 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:02:21.329931 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:02:21.329937 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:02:21.329944 | orchestrator | 2025-07-06 20:02:21.329950 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:02:21.329956 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:02:21.329964 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:02:21.329973 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:02:21.329984 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:02:21.329994 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:02:21.330000 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:02:21.330007 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:02:21.330013 | orchestrator | 2025-07-06 20:02:21.330068 | orchestrator | 2025-07-06 20:02:21.330075 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-07-06 20:02:21.330081 | orchestrator | Sunday 06 July 2025 20:02:18 +0000 (0:00:03.655) 0:00:58.684 *********** 2025-07-06 20:02:21.330088 | orchestrator | =============================================================================== 2025-07-06 20:02:21.330094 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 16.59s 2025-07-06 20:02:21.330100 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.08s 2025-07-06 20:02:21.330107 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.74s 2025-07-06 20:02:21.330113 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.96s 2025-07-06 20:02:21.330119 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.66s 2025-07-06 20:02:21.330125 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.04s 2025-07-06 20:02:21.330132 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.50s 2025-07-06 20:02:21.330138 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.20s 2025-07-06 20:02:21.330144 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.91s 2025-07-06 20:02:21.330151 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.87s 2025-07-06 20:02:21.330157 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.85s 2025-07-06 20:02:21.330163 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.58s 2025-07-06 20:02:21.330170 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.28s 2025-07-06 20:02:21.330176 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.28s 2025-07-06 20:02:21.330182 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.27s 2025-07-06 20:02:21.330188 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.21s 2025-07-06 20:02:21.330198 | orchestrator | 2025-07-06 20:02:21 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:02:24.372556 | orchestrator | 2025-07-06 20:02:24 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:24.373856 | orchestrator | 2025-07-06 20:02:24 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:24.375332 | orchestrator | 2025-07-06 20:02:24 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:24.375355 | orchestrator | 2025-07-06 20:02:24 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:02:27.416456 | orchestrator | 2025-07-06 20:02:27 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:27.416609 | orchestrator | 2025-07-06 20:02:27 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:27.418331 | orchestrator | 2025-07-06 20:02:27 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:27.418357 | orchestrator | 2025-07-06 20:02:27 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:02:30.461559 | orchestrator | 2025-07-06 20:02:30 | INFO  | Task 
fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:30.462136 | orchestrator | 2025-07-06 20:02:30 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:30.463586 | orchestrator | 2025-07-06 20:02:30 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:30.464053 | orchestrator | 2025-07-06 20:02:30 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:02:33.507125 | orchestrator | 2025-07-06 20:02:33 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:33.510362 | orchestrator | 2025-07-06 20:02:33 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:33.512188 | orchestrator | 2025-07-06 20:02:33 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:33.512319 | orchestrator | 2025-07-06 20:02:33 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:02:36.551993 | orchestrator | 2025-07-06 20:02:36 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:36.552464 | orchestrator | 2025-07-06 20:02:36 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:36.553444 | orchestrator | 2025-07-06 20:02:36 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:36.553795 | orchestrator | 2025-07-06 20:02:36 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:02:39.602676 | orchestrator | 2025-07-06 20:02:39 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:39.604024 | orchestrator | 2025-07-06 20:02:39 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:39.605504 | orchestrator | 2025-07-06 20:02:39 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:39.607441 | orchestrator | 2025-07-06 20:02:39 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:02:42.666355 | orchestrator | 2025-07-06 20:02:42 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:42.666874 | orchestrator | 2025-07-06 20:02:42 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:42.668664 | orchestrator | 2025-07-06 20:02:42 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:42.668690 | orchestrator | 2025-07-06 20:02:42 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:02:45.730388 | orchestrator | 2025-07-06 20:02:45 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:45.731994 | orchestrator | 2025-07-06 20:02:45 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:45.732853 | orchestrator | 2025-07-06 20:02:45 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:45.733738 | orchestrator | 2025-07-06 20:02:45 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:02:48.771536 | orchestrator | 2025-07-06 20:02:48 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:48.772095 | orchestrator | 2025-07-06 20:02:48 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:48.774793 | orchestrator | 2025-07-06 20:02:48 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:48.775820 | orchestrator | 2025-07-06 20:02:48 | INFO  | Wait 1 second(s) until the next 
check 2025-07-06 20:02:51.820447 | orchestrator | 2025-07-06 20:02:51 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:51.821510 | orchestrator | 2025-07-06 20:02:51 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:51.822972 | orchestrator | 2025-07-06 20:02:51 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:51.823054 | orchestrator | 2025-07-06 20:02:51 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:02:54.869436 | orchestrator | 2025-07-06 20:02:54 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:54.870249 | orchestrator | 2025-07-06 20:02:54 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:54.871371 | orchestrator | 2025-07-06 20:02:54 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:54.871439 | orchestrator | 2025-07-06 20:02:54 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:02:57.922707 | orchestrator | 2025-07-06 20:02:57 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:02:57.923101 | orchestrator | 2025-07-06 20:02:57 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:02:57.925305 | orchestrator | 2025-07-06 20:02:57 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:02:57.925397 | orchestrator | 2025-07-06 20:02:57 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:00.969290 | orchestrator | 2025-07-06 20:03:00 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:03:00.971268 | orchestrator | 2025-07-06 20:03:00 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:00.976174 | orchestrator | 2025-07-06 20:03:00 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:00.976363 | orchestrator | 2025-07-06 20:03:00 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:04.021232 | orchestrator | 2025-07-06 20:03:04 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:03:04.022306 | orchestrator | 2025-07-06 20:03:04 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:04.024463 | orchestrator | 2025-07-06 20:03:04 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:04.024496 | orchestrator | 2025-07-06 20:03:04 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:07.073979 | orchestrator | 2025-07-06 20:03:07 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:03:07.075332 | orchestrator | 2025-07-06 20:03:07 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:07.077217 | orchestrator | 2025-07-06 20:03:07 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:07.077297 | orchestrator | 2025-07-06 20:03:07 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:10.126565 | orchestrator | 2025-07-06 20:03:10 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:03:10.126668 | orchestrator | 2025-07-06 20:03:10 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:10.126684 | orchestrator | 2025-07-06 20:03:10 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 
20:03:10.126698 | orchestrator | 2025-07-06 20:03:10 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:13.193341 | orchestrator | 2025-07-06 20:03:13 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:03:13.193514 | orchestrator | 2025-07-06 20:03:13 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:13.196411 | orchestrator | 2025-07-06 20:03:13 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:13.196450 | orchestrator | 2025-07-06 20:03:13 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:16.254430 | orchestrator | 2025-07-06 20:03:16 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:03:16.256036 | orchestrator | 2025-07-06 20:03:16 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:16.256073 | orchestrator | 2025-07-06 20:03:16 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:16.256088 | orchestrator | 2025-07-06 20:03:16 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:19.296651 | orchestrator | 2025-07-06 20:03:19 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:03:19.297351 | orchestrator | 2025-07-06 20:03:19 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:19.298662 | orchestrator | 2025-07-06 20:03:19 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:19.298707 | orchestrator | 2025-07-06 20:03:19 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:22.338128 | orchestrator | 2025-07-06 20:03:22 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:03:22.343811 | orchestrator | 2025-07-06 20:03:22 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:22.343875 | orchestrator | 2025-07-06 20:03:22 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:22.343889 | orchestrator | 2025-07-06 20:03:22 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:25.382352 | orchestrator | 2025-07-06 20:03:25 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:03:25.384329 | orchestrator | 2025-07-06 20:03:25 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:25.385019 | orchestrator | 2025-07-06 20:03:25 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:25.385045 | orchestrator | 2025-07-06 20:03:25 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:28.434548 | orchestrator | 2025-07-06 20:03:28 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:03:28.435427 | orchestrator | 2025-07-06 20:03:28 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:28.435743 | orchestrator | 2025-07-06 20:03:28 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:28.436268 | orchestrator | 2025-07-06 20:03:28 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:31.500288 | orchestrator | 2025-07-06 20:03:31 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:03:31.501684 | orchestrator | 2025-07-06 20:03:31 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:31.502611 | orchestrator | 2025-07-06 
20:03:31 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:31.502768 | orchestrator | 2025-07-06 20:03:31 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:34.543855 | orchestrator | 2025-07-06 20:03:34 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:03:34.545028 | orchestrator | 2025-07-06 20:03:34 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:34.546452 | orchestrator | 2025-07-06 20:03:34 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:34.546781 | orchestrator | 2025-07-06 20:03:34 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:37.591798 | orchestrator | 2025-07-06 20:03:37 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:03:37.593276 | orchestrator | 2025-07-06 20:03:37 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:37.595706 | orchestrator | 2025-07-06 20:03:37 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:37.595793 | orchestrator | 2025-07-06 20:03:37 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:40.648382 | orchestrator | 2025-07-06 20:03:40 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:03:40.648617 | orchestrator | 2025-07-06 20:03:40 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:40.649396 | orchestrator | 2025-07-06 20:03:40 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:40.649460 | orchestrator | 2025-07-06 20:03:40 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:43.684080 | orchestrator | 2025-07-06 20:03:43 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:03:43.684368 | orchestrator | 2025-07-06 20:03:43 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:43.686124 | orchestrator | 2025-07-06 20:03:43 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:43.686281 | orchestrator | 2025-07-06 20:03:43 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:46.736669 | orchestrator | 2025-07-06 20:03:46 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:03:46.738533 | orchestrator | 2025-07-06 20:03:46 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:46.740461 | orchestrator | 2025-07-06 20:03:46 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:46.740505 | orchestrator | 2025-07-06 20:03:46 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:49.793002 | orchestrator | 2025-07-06 20:03:49 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state STARTED 2025-07-06 20:03:49.793133 | orchestrator | 2025-07-06 20:03:49 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:49.794785 | orchestrator | 2025-07-06 20:03:49 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:49.795059 | orchestrator | 2025-07-06 20:03:49 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:52.836774 | orchestrator | 2025-07-06 20:03:52 | INFO  | Task fb1c5085-37b7-4a9d-b555-845e328b5706 is in state SUCCESS 2025-07-06 20:03:52.841262 | orchestrator | 2025-07-06 20:03:52.841324 | orchestrator | 2025-07-06 
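The block of status lines above is the deployment tooling polling the queued tasks every few seconds until they leave the STARTED state; once task fb1c5085-37b7-4a9d-b555-845e328b5706 reports SUCCESS, its play output is flushed to the console. As a rough illustration only, the Python loop below sketches that polling behaviour; the names wait_for_tasks and get_state are hypothetical (not taken from the osism code base), and the state strings simply mirror the Celery-style states visible in the log (STARTED, SUCCESS).

    import time

    def wait_for_tasks(task_ids, get_state, interval=1.0):
        """Poll each task until it leaves the PENDING/STARTED states.

        get_state is a caller-supplied callable mapping a task id to a
        state string such as "STARTED" or "SUCCESS".
        """
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state not in ("PENDING", "STARTED"):
                    # Task finished (SUCCESS, FAILURE, ...); stop polling it.
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

The actual check interval observed in the log is roughly three seconds per cycle, even though the tool reports "Wait 1 second(s)"; the difference is presumably the time spent querying the task backend itself.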
20:03:52.841353 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-07-06 20:03:52.841367 | orchestrator | 2025-07-06 20:03:52.841379 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-07-06 20:03:52.841391 | orchestrator | Sunday 06 July 2025 20:01:13 +0000 (0:00:00.216) 0:00:00.216 *********** 2025-07-06 20:03:52.841403 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:03:52.841416 | orchestrator | 2025-07-06 20:03:52.841427 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-07-06 20:03:52.841460 | orchestrator | Sunday 06 July 2025 20:01:14 +0000 (0:00:01.075) 0:00:01.292 *********** 2025-07-06 20:03:52.841472 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-06 20:03:52.841490 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-06 20:03:52.841501 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-06 20:03:52.841512 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-06 20:03:52.841523 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-06 20:03:52.841534 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-06 20:03:52.841545 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-06 20:03:52.841556 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-06 20:03:52.841567 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-06 20:03:52.841578 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-06 20:03:52.841589 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-06 20:03:52.841600 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-06 20:03:52.841611 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-06 20:03:52.841624 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-06 20:03:52.841635 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-06 20:03:52.841645 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-06 20:03:52.841656 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-06 20:03:52.841668 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-06 20:03:52.841679 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-06 20:03:52.841690 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-06 20:03:52.841701 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-06 20:03:52.841712 | orchestrator | 2025-07-06 20:03:52.841723 | 
orchestrator | TASK [common : include_tasks] ************************************************** 2025-07-06 20:03:52.841734 | orchestrator | Sunday 06 July 2025 20:01:18 +0000 (0:00:04.164) 0:00:05.457 *********** 2025-07-06 20:03:52.841745 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:03:52.841758 | orchestrator | 2025-07-06 20:03:52.841769 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-07-06 20:03:52.841780 | orchestrator | Sunday 06 July 2025 20:01:19 +0000 (0:00:01.203) 0:00:06.661 *********** 2025-07-06 20:03:52.841796 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.841812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.841849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.841865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.841880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2025-07-06 20:03:52.841894 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.841915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.841930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.841944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.841972 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.841991 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.842099 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.842130 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.842145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.842159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.842174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.842195 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.842218 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.842236 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.842248 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.842261 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.842272 | orchestrator | 2025-07-06 20:03:52.842284 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-07-06 20:03:52.842296 | orchestrator | Sunday 06 July 2025 20:01:25 +0000 (0:00:05.123) 0:00:11.784 *********** 2025-07-06 20:03:52.842307 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:03:52.842319 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842338 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:03:52.842373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842397 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:03:52.842410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:03:52.842422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842452 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:03:52.842464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:03:52.842476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842507 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:03:52.842523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:03:52.842535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842558 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:03:52.842570 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:52.842582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:03:52.842599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842611 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842623 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:52.842647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:03:52.842669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842681 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842693 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:52.842716 | orchestrator | 2025-07-06 20:03:52.842728 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-07-06 20:03:52.842740 | orchestrator | Sunday 06 July 2025 20:01:26 +0000 (0:00:01.464) 0:00:13.249 *********** 2025-07-06 20:03:52.842751 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:03:52.842763 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842783 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842794 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:03:52.842806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:03:52.842823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842847 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:03:52.842863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:03:52.842876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:03:52.842888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842941 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:03:52.842962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:03:52.842974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.842998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:03:52.843016 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:03:52.843059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.843071 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:52.843082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.843094 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:52.843105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:03:52.843129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.843142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.843154 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:52.843165 | orchestrator | 2025-07-06 20:03:52.843177 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-07-06 20:03:52.843192 | orchestrator | Sunday 06 July 2025 20:01:28 +0000 (0:00:02.139) 0:00:15.388 *********** 2025-07-06 20:03:52.843204 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:03:52.843215 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:03:52.843226 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:03:52.843238 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:03:52.843249 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:52.843260 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:52.843271 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:52.843289 | orchestrator | 2025-07-06 20:03:52.843329 | orchestrator | TASK [common : Restart systemd-tmpfiles] 
*************************************** 2025-07-06 20:03:52.843341 | orchestrator | Sunday 06 July 2025 20:01:29 +0000 (0:00:00.964) 0:00:16.352 *********** 2025-07-06 20:03:52.843352 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:03:52.843364 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:03:52.843375 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:03:52.843386 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:03:52.843397 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:52.843408 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:52.843419 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:52.843430 | orchestrator | 2025-07-06 20:03:52.843441 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-07-06 20:03:52.843453 | orchestrator | Sunday 06 July 2025 20:01:30 +0000 (0:00:00.924) 0:00:17.276 *********** 2025-07-06 20:03:52.843464 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.843476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.843488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.843500 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.843519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.843537 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.843556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.843568 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.843579 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.843591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.843603 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.843620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.843632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.843655 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.843668 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.843679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.843691 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.843703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.843715 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.843732 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.843751 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.843762 | orchestrator | 2025-07-06 20:03:52.843774 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-07-06 20:03:52.843785 | orchestrator | Sunday 06 July 2025 20:01:35 +0000 (0:00:05.320) 0:00:22.596 *********** 2025-07-06 20:03:52.843801 | orchestrator | [WARNING]: Skipped 2025-07-06 20:03:52.843814 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-07-06 20:03:52.843825 | orchestrator | to this access issue: 2025-07-06 20:03:52.843837 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-07-06 20:03:52.843848 | orchestrator | directory 2025-07-06 20:03:52.843859 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-06 20:03:52.843870 | orchestrator | 2025-07-06 20:03:52.843882 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-07-06 20:03:52.843893 | orchestrator | Sunday 06 July 2025 20:01:37 +0000 (0:00:01.520) 0:00:24.117 *********** 2025-07-06 20:03:52.843904 | orchestrator | [WARNING]: Skipped 2025-07-06 20:03:52.843915 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-07-06 20:03:52.843926 | orchestrator | to this access issue: 2025-07-06 20:03:52.843937 | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-07-06 20:03:52.843949 | orchestrator | directory 2025-07-06 20:03:52.843960 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-06 20:03:52.843971 | orchestrator | 2025-07-06 20:03:52.843982 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-07-06 20:03:52.843993 | orchestrator | Sunday 06 July 2025 20:01:38 +0000 (0:00:01.037) 0:00:25.154 *********** 2025-07-06 20:03:52.844004 | orchestrator | [WARNING]: Skipped 2025-07-06 20:03:52.844016 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-07-06 20:03:52.844046 | orchestrator | to this access issue: 2025-07-06 20:03:52.844057 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-07-06 20:03:52.844068 | orchestrator | directory 2025-07-06 20:03:52.844079 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-06 20:03:52.844091 | orchestrator | 2025-07-06 20:03:52.844102 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-07-06 20:03:52.844113 | orchestrator | Sunday 06 July 2025 20:01:39 +0000 (0:00:00.868) 0:00:26.023 *********** 2025-07-06 20:03:52.844124 | orchestrator | [WARNING]: Skipped 2025-07-06 20:03:52.844135 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-07-06 20:03:52.844146 | orchestrator | to this access issue: 2025-07-06 20:03:52.844157 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-07-06 20:03:52.844168 | orchestrator | directory 2025-07-06 20:03:52.844179 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-06 20:03:52.844190 | orchestrator | 2025-07-06 20:03:52.844201 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-07-06 20:03:52.844212 | orchestrator | Sunday 06 July 2025 20:01:40 +0000 (0:00:01.031) 0:00:27.054 *********** 2025-07-06 20:03:52.844223 | orchestrator | changed: [testbed-manager] 2025-07-06 20:03:52.844235 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:03:52.844246 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:03:52.844257 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:03:52.844268 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:03:52.844279 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:03:52.844290 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:03:52.844326 | orchestrator | 2025-07-06 20:03:52.844338 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-07-06 20:03:52.844349 | orchestrator | Sunday 06 July 2025 20:01:43 +0000 (0:00:03.620) 0:00:30.675 *********** 2025-07-06 20:03:52.844360 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-06 20:03:52.844372 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-06 20:03:52.844383 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-06 20:03:52.844394 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-06 20:03:52.844405 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-06 20:03:52.844416 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-06 20:03:52.844427 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-06 20:03:52.844438 | orchestrator | 2025-07-06 20:03:52.844449 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-07-06 20:03:52.844460 | orchestrator | Sunday 06 July 2025 20:01:47 +0000 (0:00:03.140) 0:00:33.816 *********** 2025-07-06 20:03:52.844471 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:03:52.844482 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:03:52.844493 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:03:52.844504 | orchestrator | changed: [testbed-manager] 2025-07-06 20:03:52.844522 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:03:52.844534 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:03:52.844545 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:03:52.844556 | orchestrator | 2025-07-06 20:03:52.844567 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-07-06 20:03:52.844578 | orchestrator | Sunday 06 July 2025 20:01:50 +0000 (0:00:03.328) 0:00:37.145 *********** 2025-07-06 20:03:52.844598 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.844610 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.844625 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.844645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.844676 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.844717 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.844752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.844772 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.844797 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.844814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.844832 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.844864 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.844883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.844901 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.844931 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.844957 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 
20:03:52.844977 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.844996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:03:52.845042 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.845055 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.845067 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.845078 | orchestrator | 2025-07-06 20:03:52.845090 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-07-06 20:03:52.845101 | orchestrator | Sunday 06 July 2025 20:01:52 +0000 (0:00:02.107) 0:00:39.252 *********** 2025-07-06 20:03:52.845112 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-06 20:03:52.845123 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-06 20:03:52.845135 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-06 20:03:52.845145 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-06 20:03:52.845157 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 
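The loop items printed by the common role in the tasks above (config directories, config.json files, ownership checks, container checks) all share one shape: a service name keyed to a container definition. The following is a minimal sketch of that mapping, reconstructed only from the item dicts shown in this log and written in the Python notation Ansible prints; it is not the role's actual defaults file.

# Sketch of the per-service mapping the kolla-ansible "common" role iterates
# over in the tasks logged above. Values are copied from the printed items
# (fluentd, kolla_toolbox, cron); nothing here is taken from the role source.
common_services = {
    "fluentd": {
        "container_name": "fluentd",
        "group": "fluentd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/fluentd:5.0.7.20250530",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": [
            "/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
            "fluentd_data:/var/lib/fluentd/data/",
            "/var/log/journal:/var/log/journal:ro",
        ],
        "dimensions": {},
    },
    "kolla-toolbox": {
        "container_name": "kolla_toolbox",
        "group": "kolla-toolbox",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530",
        "environment": {
            "ANSIBLE_NOCOLOR": "1",
            "ANSIBLE_LIBRARY": "/usr/share/ansible",
            "REQUESTS_CA_BUNDLE": "/etc/ssl/certs/ca-certificates.crt",
        },
        "privileged": True,
        "volumes": [
            "/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "/dev/:/dev/",
            "/run/:/run/:shared",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
    "cron": {
        "container_name": "cron",
        "group": "cron",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/cron:3.0.20250530",
        "environment": {"KOLLA_LOGROTATE_SCHEDULE": "daily"},
        "volumes": [
            "/etc/kolla/cron/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
}

# Whether a host reports ok/changed or "skipping" for a given item above is
# decided by the role's per-item conditions (group membership and similar);
# those conditions are not reconstructed here.
if __name__ == "__main__":
    for name, svc in common_services.items():
        print(f"{name}: image={svc['image']}, privileged={svc.get('privileged', False)}")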
2025-07-06 20:03:52.845167 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-06 20:03:52.845178 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-06 20:03:52.845189 | orchestrator | 2025-07-06 20:03:52.845207 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-07-06 20:03:52.845219 | orchestrator | Sunday 06 July 2025 20:01:55 +0000 (0:00:02.944) 0:00:42.196 *********** 2025-07-06 20:03:52.845230 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-06 20:03:52.845241 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-06 20:03:52.845252 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-06 20:03:52.845263 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-06 20:03:52.845273 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-06 20:03:52.845284 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-06 20:03:52.845295 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-06 20:03:52.845306 | orchestrator | 2025-07-06 20:03:52.845317 | orchestrator | TASK [common : Check common containers] **************************************** 2025-07-06 20:03:52.845327 | orchestrator | Sunday 06 July 2025 20:01:59 +0000 (0:00:03.852) 0:00:46.049 *********** 2025-07-06 20:03:52.845346 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.845378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.845390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.845401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.845413 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.845431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.845453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.845472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.845484 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}}) 2025-07-06 20:03:52.845496 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:03:52.845507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.845528 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.845546 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.845558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.845581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.845593 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.845604 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.845616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.845628 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.845639 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.845651 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:03:52.845662 | orchestrator | 2025-07-06 20:03:52.845674 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-07-06 20:03:52.845685 | orchestrator | Sunday 06 July 2025 20:02:02 +0000 (0:00:03.394) 0:00:49.443 *********** 2025-07-06 20:03:52.845702 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:03:52.845713 | orchestrator | changed: [testbed-manager] 2025-07-06 20:03:52.845724 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:03:52.845741 | orchestrator | changed: 
[testbed-node-2] 2025-07-06 20:03:52.845752 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:03:52.845763 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:03:52.845774 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:03:52.845785 | orchestrator | 2025-07-06 20:03:52.845796 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-07-06 20:03:52.845807 | orchestrator | Sunday 06 July 2025 20:02:04 +0000 (0:00:01.794) 0:00:51.238 *********** 2025-07-06 20:03:52.845818 | orchestrator | changed: [testbed-manager] 2025-07-06 20:03:52.845829 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:03:52.845840 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:03:52.845851 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:03:52.845862 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:03:52.845872 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:03:52.845883 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:03:52.845894 | orchestrator | 2025-07-06 20:03:52.845910 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-06 20:03:52.845921 | orchestrator | Sunday 06 July 2025 20:02:05 +0000 (0:00:01.124) 0:00:52.363 *********** 2025-07-06 20:03:52.845932 | orchestrator | 2025-07-06 20:03:52.845943 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-06 20:03:52.845954 | orchestrator | Sunday 06 July 2025 20:02:05 +0000 (0:00:00.185) 0:00:52.548 *********** 2025-07-06 20:03:52.845965 | orchestrator | 2025-07-06 20:03:52.845976 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-06 20:03:52.845987 | orchestrator | Sunday 06 July 2025 20:02:05 +0000 (0:00:00.058) 0:00:52.607 *********** 2025-07-06 20:03:52.845998 | orchestrator | 2025-07-06 20:03:52.846009 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-06 20:03:52.846128 | orchestrator | Sunday 06 July 2025 20:02:05 +0000 (0:00:00.079) 0:00:52.687 *********** 2025-07-06 20:03:52.846142 | orchestrator | 2025-07-06 20:03:52.846153 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-06 20:03:52.846164 | orchestrator | Sunday 06 July 2025 20:02:06 +0000 (0:00:00.086) 0:00:52.774 *********** 2025-07-06 20:03:52.846175 | orchestrator | 2025-07-06 20:03:52.846185 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-06 20:03:52.846194 | orchestrator | Sunday 06 July 2025 20:02:06 +0000 (0:00:00.051) 0:00:52.825 *********** 2025-07-06 20:03:52.846204 | orchestrator | 2025-07-06 20:03:52.846214 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-06 20:03:52.846224 | orchestrator | Sunday 06 July 2025 20:02:06 +0000 (0:00:00.046) 0:00:52.872 *********** 2025-07-06 20:03:52.846233 | orchestrator | 2025-07-06 20:03:52.846243 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-07-06 20:03:52.846253 | orchestrator | Sunday 06 July 2025 20:02:06 +0000 (0:00:00.068) 0:00:52.940 *********** 2025-07-06 20:03:52.846263 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:03:52.846272 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:03:52.846282 | orchestrator | changed: [testbed-manager] 2025-07-06 20:03:52.846292 | orchestrator | 
changed: [testbed-node-3] 2025-07-06 20:03:52.846302 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:03:52.846311 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:03:52.846321 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:03:52.846331 | orchestrator | 2025-07-06 20:03:52.846341 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-07-06 20:03:52.846350 | orchestrator | Sunday 06 July 2025 20:02:47 +0000 (0:00:41.741) 0:01:34.682 *********** 2025-07-06 20:03:52.846360 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:03:52.846370 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:03:52.846379 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:03:52.846389 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:03:52.846399 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:03:52.846408 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:03:52.846426 | orchestrator | changed: [testbed-manager] 2025-07-06 20:03:52.846436 | orchestrator | 2025-07-06 20:03:52.846446 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-07-06 20:03:52.846456 | orchestrator | Sunday 06 July 2025 20:03:38 +0000 (0:00:50.159) 0:02:24.842 *********** 2025-07-06 20:03:52.846465 | orchestrator | ok: [testbed-manager] 2025-07-06 20:03:52.846475 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:03:52.846485 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:03:52.846495 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:03:52.846505 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:03:52.846514 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:03:52.846524 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:03:52.846533 | orchestrator | 2025-07-06 20:03:52.846543 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-07-06 20:03:52.846553 | orchestrator | Sunday 06 July 2025 20:03:40 +0000 (0:00:01.967) 0:02:26.810 *********** 2025-07-06 20:03:52.846563 | orchestrator | changed: [testbed-manager] 2025-07-06 20:03:52.846572 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:03:52.846582 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:03:52.846592 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:03:52.846601 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:03:52.846611 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:03:52.846620 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:03:52.846630 | orchestrator | 2025-07-06 20:03:52.846640 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:03:52.846651 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-06 20:03:52.846661 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-06 20:03:52.846671 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-06 20:03:52.846689 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-06 20:03:52.846700 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-06 20:03:52.846710 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-06 20:03:52.846720 | orchestrator | 
testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-06 20:03:52.846729 | orchestrator | 2025-07-06 20:03:52.846739 | orchestrator | 2025-07-06 20:03:52.846749 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:03:52.846768 | orchestrator | Sunday 06 July 2025 20:03:49 +0000 (0:00:09.484) 0:02:36.295 *********** 2025-07-06 20:03:52.846778 | orchestrator | =============================================================================== 2025-07-06 20:03:52.846788 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 50.16s 2025-07-06 20:03:52.846797 | orchestrator | common : Restart fluentd container ------------------------------------- 41.74s 2025-07-06 20:03:52.846807 | orchestrator | common : Restart cron container ----------------------------------------- 9.48s 2025-07-06 20:03:52.846817 | orchestrator | common : Copying over config.json files for services -------------------- 5.32s 2025-07-06 20:03:52.846827 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.12s 2025-07-06 20:03:52.846836 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.16s 2025-07-06 20:03:52.846846 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.85s 2025-07-06 20:03:52.846861 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.62s 2025-07-06 20:03:52.846871 | orchestrator | common : Check common containers ---------------------------------------- 3.39s 2025-07-06 20:03:52.846881 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.33s 2025-07-06 20:03:52.846891 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.14s 2025-07-06 20:03:52.846900 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.94s 2025-07-06 20:03:52.846910 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.14s 2025-07-06 20:03:52.846920 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.11s 2025-07-06 20:03:52.846929 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.97s 2025-07-06 20:03:52.846939 | orchestrator | common : Creating log volume -------------------------------------------- 1.79s 2025-07-06 20:03:52.846949 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.52s 2025-07-06 20:03:52.846958 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.46s 2025-07-06 20:03:52.846968 | orchestrator | common : include_tasks -------------------------------------------------- 1.20s 2025-07-06 20:03:52.846978 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.12s 2025-07-06 20:03:52.846987 | orchestrator | 2025-07-06 20:03:52 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:52.846998 | orchestrator | 2025-07-06 20:03:52 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:52.847008 | orchestrator | 2025-07-06 20:03:52 | INFO  | Task c5475a5c-4980-4e3d-8b64-765258840020 is in state STARTED 2025-07-06 20:03:52.847190 | orchestrator | 2025-07-06 20:03:52 | INFO  | Task 996bf96a-b25e-4044-aaad-cf54d9208e16 
is in state STARTED 2025-07-06 20:03:52.847205 | orchestrator | 2025-07-06 20:03:52 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED 2025-07-06 20:03:52.847213 | orchestrator | 2025-07-06 20:03:52 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:03:52.847221 | orchestrator | 2025-07-06 20:03:52 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:55.882244 | orchestrator | 2025-07-06 20:03:55 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:55.882370 | orchestrator | 2025-07-06 20:03:55 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:55.882387 | orchestrator | 2025-07-06 20:03:55 | INFO  | Task c5475a5c-4980-4e3d-8b64-765258840020 is in state STARTED 2025-07-06 20:03:55.882398 | orchestrator | 2025-07-06 20:03:55 | INFO  | Task 996bf96a-b25e-4044-aaad-cf54d9208e16 is in state STARTED 2025-07-06 20:03:55.882409 | orchestrator | 2025-07-06 20:03:55 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED 2025-07-06 20:03:55.883286 | orchestrator | 2025-07-06 20:03:55 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:03:55.883340 | orchestrator | 2025-07-06 20:03:55 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:03:58.922164 | orchestrator | 2025-07-06 20:03:58 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:03:58.922469 | orchestrator | 2025-07-06 20:03:58 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:03:58.922804 | orchestrator | 2025-07-06 20:03:58 | INFO  | Task c5475a5c-4980-4e3d-8b64-765258840020 is in state STARTED 2025-07-06 20:03:58.923720 | orchestrator | 2025-07-06 20:03:58 | INFO  | Task 996bf96a-b25e-4044-aaad-cf54d9208e16 is in state STARTED 2025-07-06 20:03:58.923965 | orchestrator | 2025-07-06 20:03:58 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED 2025-07-06 20:03:58.924719 | orchestrator | 2025-07-06 20:03:58 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:03:58.924755 | orchestrator | 2025-07-06 20:03:58 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:04:01.970811 | orchestrator | 2025-07-06 20:04:01 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:04:01.970980 | orchestrator | 2025-07-06 20:04:01 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:04:01.971602 | orchestrator | 2025-07-06 20:04:01 | INFO  | Task c5475a5c-4980-4e3d-8b64-765258840020 is in state STARTED 2025-07-06 20:04:01.972319 | orchestrator | 2025-07-06 20:04:01 | INFO  | Task 996bf96a-b25e-4044-aaad-cf54d9208e16 is in state STARTED 2025-07-06 20:04:01.972836 | orchestrator | 2025-07-06 20:04:01 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED 2025-07-06 20:04:01.973558 | orchestrator | 2025-07-06 20:04:01 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:04:01.975662 | orchestrator | 2025-07-06 20:04:01 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:04:05.018928 | orchestrator | 2025-07-06 20:04:05 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:04:05.019893 | orchestrator | 2025-07-06 20:04:05 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:04:05.028545 | orchestrator | 2025-07-06 20:04:05 | INFO  | Task 
c5475a5c-4980-4e3d-8b64-765258840020 is in state STARTED 2025-07-06 20:04:05.029698 | orchestrator | 2025-07-06 20:04:05 | INFO  | Task 996bf96a-b25e-4044-aaad-cf54d9208e16 is in state STARTED 2025-07-06 20:04:05.030741 | orchestrator | 2025-07-06 20:04:05 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED 2025-07-06 20:04:05.038527 | orchestrator | 2025-07-06 20:04:05 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:04:05.038617 | orchestrator | 2025-07-06 20:04:05 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:04:08.064957 | orchestrator | 2025-07-06 20:04:08 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:04:08.065080 | orchestrator | 2025-07-06 20:04:08 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:04:08.065808 | orchestrator | 2025-07-06 20:04:08 | INFO  | Task c5475a5c-4980-4e3d-8b64-765258840020 is in state STARTED 2025-07-06 20:04:08.066209 | orchestrator | 2025-07-06 20:04:08 | INFO  | Task 996bf96a-b25e-4044-aaad-cf54d9208e16 is in state STARTED 2025-07-06 20:04:08.069152 | orchestrator | 2025-07-06 20:04:08 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED 2025-07-06 20:04:08.069778 | orchestrator | 2025-07-06 20:04:08 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:04:08.069792 | orchestrator | 2025-07-06 20:04:08 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:04:11.095052 | orchestrator | 2025-07-06 20:04:11 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:04:11.101912 | orchestrator | 2025-07-06 20:04:11 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:04:11.102078 | orchestrator | 2025-07-06 20:04:11 | INFO  | Task c5475a5c-4980-4e3d-8b64-765258840020 is in state SUCCESS 2025-07-06 20:04:11.102108 | orchestrator | 2025-07-06 20:04:11 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:04:11.102131 | orchestrator | 2025-07-06 20:04:11 | INFO  | Task 996bf96a-b25e-4044-aaad-cf54d9208e16 is in state STARTED 2025-07-06 20:04:11.102616 | orchestrator | 2025-07-06 20:04:11 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED 2025-07-06 20:04:11.104362 | orchestrator | 2025-07-06 20:04:11 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:04:11.106625 | orchestrator | 2025-07-06 20:04:11 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:04:14.143500 | orchestrator | 2025-07-06 20:04:14 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:04:14.144733 | orchestrator | 2025-07-06 20:04:14 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:04:14.145846 | orchestrator | 2025-07-06 20:04:14 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:04:14.147743 | orchestrator | 2025-07-06 20:04:14 | INFO  | Task 996bf96a-b25e-4044-aaad-cf54d9208e16 is in state STARTED 2025-07-06 20:04:14.148288 | orchestrator | 2025-07-06 20:04:14 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED 2025-07-06 20:04:14.149552 | orchestrator | 2025-07-06 20:04:14 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:04:14.149586 | orchestrator | 2025-07-06 20:04:14 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:04:17.178780 | orchestrator | 2025-07-06 
20:04:17 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:04:17.178895 | orchestrator | 2025-07-06 20:04:17 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:04:17.178920 | orchestrator | 2025-07-06 20:04:17 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:04:17.179169 | orchestrator | 2025-07-06 20:04:17 | INFO  | Task 996bf96a-b25e-4044-aaad-cf54d9208e16 is in state SUCCESS 2025-07-06 20:04:17.180896 | orchestrator | 2025-07-06 20:04:17.180943 | orchestrator | 2025-07-06 20:04:17.180956 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:04:17.180999 | orchestrator | 2025-07-06 20:04:17.181018 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:04:17.181030 | orchestrator | Sunday 06 July 2025 20:03:56 +0000 (0:00:00.664) 0:00:00.664 *********** 2025-07-06 20:04:17.181042 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:04:17.181054 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:04:17.181065 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:04:17.181075 | orchestrator | 2025-07-06 20:04:17.181087 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:04:17.181098 | orchestrator | Sunday 06 July 2025 20:03:57 +0000 (0:00:00.803) 0:00:01.468 *********** 2025-07-06 20:04:17.181110 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-07-06 20:04:17.181121 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-07-06 20:04:17.181132 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-07-06 20:04:17.181143 | orchestrator | 2025-07-06 20:04:17.181154 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-07-06 20:04:17.181165 | orchestrator | 2025-07-06 20:04:17.181176 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-07-06 20:04:17.181187 | orchestrator | Sunday 06 July 2025 20:03:58 +0000 (0:00:00.975) 0:00:02.444 *********** 2025-07-06 20:04:17.181198 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:04:17.181210 | orchestrator | 2025-07-06 20:04:17.181221 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-07-06 20:04:17.181232 | orchestrator | Sunday 06 July 2025 20:03:59 +0000 (0:00:00.727) 0:00:03.171 *********** 2025-07-06 20:04:17.181242 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-07-06 20:04:17.181279 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-07-06 20:04:17.181291 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-07-06 20:04:17.181302 | orchestrator | 2025-07-06 20:04:17.181313 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-07-06 20:04:17.181324 | orchestrator | Sunday 06 July 2025 20:03:59 +0000 (0:00:00.818) 0:00:03.990 *********** 2025-07-06 20:04:17.181334 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-07-06 20:04:17.181346 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-07-06 20:04:17.181357 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-07-06 20:04:17.181368 | orchestrator | 2025-07-06 20:04:17.181379 | 
orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-07-06 20:04:17.181390 | orchestrator | Sunday 06 July 2025 20:04:02 +0000 (0:00:02.887) 0:00:06.878 ***********
2025-07-06 20:04:17.181401 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:04:17.181411 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:04:17.181422 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:04:17.181433 | orchestrator |
2025-07-06 20:04:17.181444 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-07-06 20:04:17.181455 | orchestrator | Sunday 06 July 2025 20:04:05 +0000 (0:00:02.735) 0:00:09.614 ***********
2025-07-06 20:04:17.181466 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:04:17.181478 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:04:17.181490 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:04:17.181503 | orchestrator |
2025-07-06 20:04:17.181516 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:04:17.181529 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:04:17.181544 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:04:17.181557 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:04:17.181569 | orchestrator |
2025-07-06 20:04:17.181582 | orchestrator |
2025-07-06 20:04:17.181594 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:04:17.181608 | orchestrator | Sunday 06 July 2025 20:04:08 +0000 (0:00:02.553) 0:00:12.167 ***********
2025-07-06 20:04:17.181620 | orchestrator | ===============================================================================
2025-07-06 20:04:17.181633 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.89s
2025-07-06 20:04:17.181646 | orchestrator | memcached : Check memcached container ----------------------------------- 2.74s
2025-07-06 20:04:17.181659 | orchestrator | memcached : Restart memcached container --------------------------------- 2.55s
2025-07-06 20:04:17.181672 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.98s
2025-07-06 20:04:17.181691 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.82s
2025-07-06 20:04:17.181705 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.80s
2025-07-06 20:04:17.181718 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.73s
2025-07-06 20:04:17.181730 | orchestrator |
2025-07-06 20:04:17.181742 | orchestrator |
2025-07-06 20:04:17.181755 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:04:17.181767 | orchestrator |
2025-07-06 20:04:17.181780 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:04:17.181793 | orchestrator | Sunday 06 July 2025 20:03:57 +0000 (0:00:00.424) 0:00:00.424 ***********
2025-07-06 20:04:17.181807 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:04:17.181820 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:04:17.181831 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:04:17.181842 | orchestrator |
2025-07-06
20:04:17.181853 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:04:17.181883 | orchestrator | Sunday 06 July 2025 20:03:57 +0000 (0:00:00.472) 0:00:00.896 *********** 2025-07-06 20:04:17.181895 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-07-06 20:04:17.181906 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-07-06 20:04:17.181917 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-07-06 20:04:17.181928 | orchestrator | 2025-07-06 20:04:17.181939 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-07-06 20:04:17.181950 | orchestrator | 2025-07-06 20:04:17.181961 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-07-06 20:04:17.181998 | orchestrator | Sunday 06 July 2025 20:03:58 +0000 (0:00:00.701) 0:00:01.598 *********** 2025-07-06 20:04:17.182010 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:04:17.182096 | orchestrator | 2025-07-06 20:04:17.182108 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-07-06 20:04:17.182120 | orchestrator | Sunday 06 July 2025 20:03:59 +0000 (0:00:00.754) 0:00:02.352 *********** 2025-07-06 20:04:17.182134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182237 | orchestrator | 2025-07-06 20:04:17.182249 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-07-06 20:04:17.182260 | orchestrator | Sunday 06 July 2025 20:04:00 +0000 (0:00:01.594) 0:00:03.947 *********** 2025-07-06 20:04:17.182272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182371 | orchestrator | 2025-07-06 20:04:17.182383 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-07-06 20:04:17.182394 | orchestrator | Sunday 06 July 2025 20:04:03 +0000 (0:00:02.841) 0:00:06.789 *********** 2025-07-06 20:04:17.182405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 
20:04:17.182417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182498 | orchestrator | 2025-07-06 20:04:17.182509 | orchestrator | TASK [redis : Check redis containers] 
****************************************** 2025-07-06 20:04:17.182521 | orchestrator | Sunday 06 July 2025 20:04:06 +0000 (0:00:03.220) 0:00:10.009 *********** 2025-07-06 20:04:17.182532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': 
{'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:04:17.182619 | orchestrator | 2025-07-06 20:04:17.182631 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-07-06 20:04:17.182642 | orchestrator | Sunday 06 July 2025 20:04:08 +0000 (0:00:01.563) 0:00:11.573 *********** 2025-07-06 20:04:17.182652 | orchestrator | 2025-07-06 20:04:17.182721 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-07-06 20:04:17.182734 | orchestrator | Sunday 06 July 2025 20:04:08 +0000 (0:00:00.058) 0:00:11.631 *********** 2025-07-06 20:04:17.182745 | orchestrator | 2025-07-06 20:04:17.182756 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-07-06 20:04:17.182767 | orchestrator | Sunday 06 July 2025 20:04:08 +0000 (0:00:00.067) 0:00:11.699 *********** 2025-07-06 20:04:17.182778 | orchestrator | 2025-07-06 20:04:17.182789 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-07-06 20:04:17.182800 | orchestrator | Sunday 06 July 2025 20:04:08 +0000 (0:00:00.082) 0:00:11.781 *********** 2025-07-06 20:04:17.182811 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:04:17.182822 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:04:17.182833 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:04:17.182844 | orchestrator | 2025-07-06 20:04:17.182855 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-07-06 20:04:17.182866 | orchestrator | Sunday 06 July 2025 20:04:12 +0000 (0:00:03.646) 0:00:15.428 *********** 2025-07-06 20:04:17.182877 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:04:17.182887 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:04:17.182898 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:04:17.182909 | orchestrator | 2025-07-06 20:04:17.182920 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:04:17.182931 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:04:17.182943 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:04:17.182954 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:04:17.182988 | orchestrator | 2025-07-06 20:04:17.183001 | orchestrator | 2025-07-06 20:04:17.183012 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:04:17.183031 | orchestrator | Sunday 06 July 2025 20:04:16 +0000 (0:00:03.681) 0:00:19.109 *********** 2025-07-06 20:04:17.183042 | orchestrator | =============================================================================== 2025-07-06 20:04:17.183053 | orchestrator | redis : Restart 
redis-sentinel container -------------------------------- 3.68s
2025-07-06 20:04:17.183064 | orchestrator | redis : Restart redis container ----------------------------------------- 3.65s
2025-07-06 20:04:17.183075 | orchestrator | redis : Copying over redis config files --------------------------------- 3.22s
2025-07-06 20:04:17.183086 | orchestrator | redis : Copying over default config.json files -------------------------- 2.84s
2025-07-06 20:04:17.183097 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.59s
2025-07-06 20:04:17.183108 | orchestrator | redis : Check redis containers ------------------------------------------ 1.56s
2025-07-06 20:04:17.183119 | orchestrator | redis : include_tasks --------------------------------------------------- 0.76s
2025-07-06 20:04:17.183130 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s
2025-07-06 20:04:17.183140 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s
2025-07-06 20:04:17.183151 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.21s
2025-07-06 20:04:17.183343 | orchestrator | 2025-07-06 20:04:17 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED
2025-07-06 20:04:17.183360 | orchestrator | 2025-07-06 20:04:17 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED
2025-07-06 20:04:17.183372 | orchestrator | 2025-07-06 20:04:17 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:04:20.223045 | orchestrator | 2025-07-06 20:04:20 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED
2025-07-06 20:04:20.225473 | orchestrator | 2025-07-06 20:04:20 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED
2025-07-06 20:04:20.227946 | orchestrator | 2025-07-06 20:04:20 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED
2025-07-06 20:04:20.229318 | orchestrator | 2025-07-06 20:04:20 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED
2025-07-06 20:04:20.231759 | orchestrator | 2025-07-06 20:04:20 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED
2025-07-06 20:04:20.231823 | orchestrator | 2025-07-06 20:04:20 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:04:23.274357 | orchestrator | 2025-07-06 20:04:23 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED
2025-07-06 20:04:23.274458 | orchestrator | 2025-07-06 20:04:23 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED
2025-07-06 20:04:23.276029 | orchestrator | 2025-07-06 20:04:23 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED
2025-07-06 20:04:23.276055 | orchestrator | 2025-07-06 20:04:23 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED
2025-07-06 20:04:23.276068 | orchestrator | 2025-07-06 20:04:23 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED
2025-07-06 20:04:23.276081 | orchestrator | 2025-07-06 20:04:23 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:04:26.325383 | orchestrator | 2025-07-06 20:04:26 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED
2025-07-06 20:04:26.325486 | orchestrator | 2025-07-06 20:04:26 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED
2025-07-06 20:04:26.326381 | orchestrator | 2025-07-06 20:04:26 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED
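Each redis and redis-sentinel container definition dumped above carries a healthcheck of the form ['CMD-SHELL', 'healthcheck_listen redis-server 6379'] plus interval, retries, start_period and timeout values, which appear to map onto the standard container healthcheck parameters; the check itself presumably verifies that the named process is listening on the given port. A much simplified stand-in for such a check (a plain TCP connect, not kolla's actual healthcheck_listen script; the host and port defaults are illustrative only):

    import socket
    import sys

    def port_is_listening(host: str, port: int, timeout: float = 5.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        # Defaults mirror the redis healthcheck shown in the log; adjust as needed.
        host = sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1"
        port = int(sys.argv[2]) if len(sys.argv) > 2 else 6379
        sys.exit(0 if port_is_listening(host, port) else 1)

By the usual container healthcheck convention, exit code 0 marks the container healthy and a non-zero exit counts against the configured retries.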
2025-07-06 20:04:26.328202 | orchestrator | 2025-07-06 20:04:26 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED
2025-07-06 20:04:26.328877 | orchestrator | 2025-07-06 20:04:26 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED
2025-07-06 20:04:26.330793 | orchestrator | 2025-07-06 20:04:26 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:04:29.362869 | orchestrator | 2025-07-06 20:04:29 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED
2025-07-06 20:04:29.366993 | orchestrator | 2025-07-06 20:04:29 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED
2025-07-06 20:04:29.367741 | orchestrator | 2025-07-06 20:04:29 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED
2025-07-06 20:04:29.368571 | orchestrator | 2025-07-06 20:04:29 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED
2025-07-06 20:04:29.369196 | orchestrator | 2025-07-06 20:04:29 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED
2025-07-06 20:04:29.369350 | orchestrator | 2025-07-06 20:04:29 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:04:32.409798 | orchestrator | 2025-07-06 20:04:32 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED
2025-07-06 20:04:32.411747 | orchestrator | 2025-07-06 20:04:32 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED
2025-07-06 20:04:32.413697 | orchestrator | 2025-07-06 20:04:32 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED
2025-07-06 20:04:32.416269 | orchestrator | 2025-07-06 20:04:32 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED
2025-07-06 20:04:32.418451 | orchestrator | 2025-07-06 20:04:32 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED
2025-07-06 20:04:32.418490 | orchestrator | 2025-07-06 20:04:32 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:04:35.449565 | orchestrator | 2025-07-06 20:04:35 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED
2025-07-06 20:04:35.449788 | orchestrator | 2025-07-06 20:04:35 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED
2025-07-06 20:04:35.451692 | orchestrator | 2025-07-06 20:04:35 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED
2025-07-06 20:04:35.451729 | orchestrator | 2025-07-06 20:04:35 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED
2025-07-06 20:04:35.454138 | orchestrator | 2025-07-06 20:04:35 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED
2025-07-06 20:04:35.454173 | orchestrator | 2025-07-06 20:04:35 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:04:38.484222 | orchestrator | 2025-07-06 20:04:38 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED
2025-07-06 20:04:38.485104 | orchestrator | 2025-07-06 20:04:38 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED
2025-07-06 20:04:38.486623 | orchestrator | 2025-07-06 20:04:38 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED
2025-07-06 20:04:38.488142 | orchestrator | 2025-07-06 20:04:38 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED
2025-07-06 20:04:38.491050 | orchestrator | 2025-07-06 20:04:38 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED
2025-07-06 20:04:38.491579 | orchestrator | 2025-07-06 20:04:38 | INFO  | Wait 1 second(s) until the next
check 2025-07-06 20:04:41.518711 | orchestrator | 2025-07-06 20:04:41 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:04:41.519152 | orchestrator | 2025-07-06 20:04:41 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:04:41.519718 | orchestrator | 2025-07-06 20:04:41 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:04:41.523027 | orchestrator | 2025-07-06 20:04:41 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED 2025-07-06 20:04:41.523116 | orchestrator | 2025-07-06 20:04:41 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:04:41.523139 | orchestrator | 2025-07-06 20:04:41 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:04:44.559820 | orchestrator | 2025-07-06 20:04:44 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:04:44.559983 | orchestrator | 2025-07-06 20:04:44 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:04:44.562667 | orchestrator | 2025-07-06 20:04:44 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:04:44.562698 | orchestrator | 2025-07-06 20:04:44 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED 2025-07-06 20:04:44.564173 | orchestrator | 2025-07-06 20:04:44 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:04:44.564217 | orchestrator | 2025-07-06 20:04:44 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:04:47.609591 | orchestrator | 2025-07-06 20:04:47 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:04:47.615465 | orchestrator | 2025-07-06 20:04:47 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:04:47.615643 | orchestrator | 2025-07-06 20:04:47 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:04:47.615672 | orchestrator | 2025-07-06 20:04:47 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED 2025-07-06 20:04:47.615816 | orchestrator | 2025-07-06 20:04:47 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:04:47.615845 | orchestrator | 2025-07-06 20:04:47 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:04:50.651786 | orchestrator | 2025-07-06 20:04:50 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:04:50.652090 | orchestrator | 2025-07-06 20:04:50 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:04:50.653025 | orchestrator | 2025-07-06 20:04:50 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:04:50.653874 | orchestrator | 2025-07-06 20:04:50 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED 2025-07-06 20:04:50.654740 | orchestrator | 2025-07-06 20:04:50 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:04:50.654838 | orchestrator | 2025-07-06 20:04:50 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:04:53.691971 | orchestrator | 2025-07-06 20:04:53 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:04:53.694250 | orchestrator | 2025-07-06 20:04:53 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:04:53.697013 | orchestrator | 2025-07-06 20:04:53 | INFO  | Task 
bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:04:53.699097 | orchestrator | 2025-07-06 20:04:53 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED 2025-07-06 20:04:53.700842 | orchestrator | 2025-07-06 20:04:53 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:04:53.700943 | orchestrator | 2025-07-06 20:04:53 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:04:56.741278 | orchestrator | 2025-07-06 20:04:56 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:04:56.741420 | orchestrator | 2025-07-06 20:04:56 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:04:56.741724 | orchestrator | 2025-07-06 20:04:56 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:04:56.744810 | orchestrator | 2025-07-06 20:04:56 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED 2025-07-06 20:04:56.746547 | orchestrator | 2025-07-06 20:04:56 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:04:56.746572 | orchestrator | 2025-07-06 20:04:56 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:04:59.790857 | orchestrator | 2025-07-06 20:04:59 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:04:59.791672 | orchestrator | 2025-07-06 20:04:59 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:04:59.792835 | orchestrator | 2025-07-06 20:04:59 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:04:59.793954 | orchestrator | 2025-07-06 20:04:59 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED 2025-07-06 20:04:59.795650 | orchestrator | 2025-07-06 20:04:59 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:04:59.795742 | orchestrator | 2025-07-06 20:04:59 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:02.840467 | orchestrator | 2025-07-06 20:05:02 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:02.842226 | orchestrator | 2025-07-06 20:05:02 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:02.845098 | orchestrator | 2025-07-06 20:05:02 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:02.848499 | orchestrator | 2025-07-06 20:05:02 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state STARTED 2025-07-06 20:05:02.851177 | orchestrator | 2025-07-06 20:05:02 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:02.851397 | orchestrator | 2025-07-06 20:05:02 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:05.886534 | orchestrator | 2025-07-06 20:05:05 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:05.888710 | orchestrator | 2025-07-06 20:05:05 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:05.891397 | orchestrator | 2025-07-06 20:05:05 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:05.893770 | orchestrator | 2025-07-06 20:05:05 | INFO  | Task 8191bd1b-48b6-43b8-9085-a7dc171311d5 is in state SUCCESS 2025-07-06 20:05:05.895277 | orchestrator | 2025-07-06 20:05:05.895319 | orchestrator | 2025-07-06 20:05:05.895332 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-07-06 20:05:05.895345 | orchestrator | 2025-07-06 20:05:05.895357 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:05:05.895369 | orchestrator | Sunday 06 July 2025 20:03:58 +0000 (0:00:00.417) 0:00:00.417 *********** 2025-07-06 20:05:05.895380 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:05:05.895393 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:05:05.895404 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:05:05.895415 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:05:05.895426 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:05:05.895437 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:05:05.895448 | orchestrator | 2025-07-06 20:05:05.895459 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:05:05.895471 | orchestrator | Sunday 06 July 2025 20:03:59 +0000 (0:00:01.224) 0:00:01.642 *********** 2025-07-06 20:05:05.895509 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-06 20:05:05.895521 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-06 20:05:05.895533 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-06 20:05:05.895544 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-06 20:05:05.895555 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-06 20:05:05.895567 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-06 20:05:05.895579 | orchestrator | 2025-07-06 20:05:05.895590 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-07-06 20:05:05.895600 | orchestrator | 2025-07-06 20:05:05.895611 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-07-06 20:05:05.895622 | orchestrator | Sunday 06 July 2025 20:04:00 +0000 (0:00:00.995) 0:00:02.637 *********** 2025-07-06 20:05:05.895634 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:05:05.895647 | orchestrator | 2025-07-06 20:05:05.895658 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-06 20:05:05.895669 | orchestrator | Sunday 06 July 2025 20:04:03 +0000 (0:00:02.511) 0:00:05.149 *********** 2025-07-06 20:05:05.895694 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-07-06 20:05:05.895707 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-07-06 20:05:05.895726 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-07-06 20:05:05.895745 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-07-06 20:05:05.895763 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-07-06 20:05:05.895782 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-07-06 20:05:05.895802 | orchestrator | 2025-07-06 20:05:05.895823 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-06 20:05:05.895842 | orchestrator | Sunday 06 July 2025 20:04:05 +0000 (0:00:02.200) 0:00:07.349 *********** 2025-07-06 20:05:05.895860 | orchestrator | 
changed: [testbed-node-2] => (item=openvswitch) 2025-07-06 20:05:05.895899 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-07-06 20:05:05.895911 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-07-06 20:05:05.895922 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-07-06 20:05:05.895933 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-07-06 20:05:05.895944 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-07-06 20:05:05.895955 | orchestrator | 2025-07-06 20:05:05.895966 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-06 20:05:05.895977 | orchestrator | Sunday 06 July 2025 20:04:07 +0000 (0:00:02.116) 0:00:09.466 *********** 2025-07-06 20:05:05.895988 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-07-06 20:05:05.895999 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:05:05.896011 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-07-06 20:05:05.896022 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:05:05.896033 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-07-06 20:05:05.896044 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:05:05.896055 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-07-06 20:05:05.896065 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:05:05.896076 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-07-06 20:05:05.896087 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:05:05.896098 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-07-06 20:05:05.896109 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:05.896120 | orchestrator | 2025-07-06 20:05:05.896141 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-07-06 20:05:05.896153 | orchestrator | Sunday 06 July 2025 20:04:08 +0000 (0:00:01.192) 0:00:10.659 *********** 2025-07-06 20:05:05.896164 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:05:05.896175 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:05:05.896186 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:05:05.896196 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:05:05.896242 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:05:05.896254 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:05.896265 | orchestrator | 2025-07-06 20:05:05.896276 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-07-06 20:05:05.896288 | orchestrator | Sunday 06 July 2025 20:04:09 +0000 (0:00:01.188) 0:00:11.848 *********** 2025-07-06 20:05:05.896340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 
20:05:05.896359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896429 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896441 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896453 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896482 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896500 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896518 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896531 | orchestrator | 2025-07-06 20:05:05.896543 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-07-06 20:05:05.896554 | orchestrator | Sunday 06 July 2025 20:04:11 +0000 (0:00:01.788) 0:00:13.636 *********** 2025-07-06 20:05:05.896566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896612 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896624 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896683 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896701 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896718 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896730 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896742 | orchestrator | 2025-07-06 20:05:05.896753 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 
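The per-item output above carries the full kolla service definitions for the two Open vSwitch containers: image, bind mounts, and a healthcheck ("ovsdb-client list-dbs" for the DB server, "ovs-appctl version" for vswitchd). As a reading aid, the following Python sketch turns one of those definitions into an equivalent `docker run` command line. The `service` literal is copied from the log (with the volume list abbreviated); the flag mapping is an illustrative assumption, since kolla-ansible drives the container engine through its own Ansible modules rather than the docker CLI.

```python
# Reading aid only: mirrors the healthcheck fields visible in the task output
# above ('interval', 'retries', 'start_period', 'test', 'timeout') and maps one
# logged service definition onto `docker run` style arguments. This is NOT the
# deployment code kolla-ansible runs.

service = {
    "container_name": "openvswitch_db",
    "image": "registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530",
    "volumes": [  # abbreviated from the log output
        "/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro",
        "kolla_logs:/var/log/kolla/",
        "openvswitch_db:/var/lib/openvswitch/",
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "ovsdb-client list-dbs"],
        "timeout": "30",
    },
}


def docker_run_args(svc: dict) -> list[str]:
    """Build a docker CLI argument list roughly equivalent to the logged definition."""
    hc = svc["healthcheck"]
    args = ["docker", "run", "-d", "--name", svc["container_name"]]
    for volume in svc["volumes"]:
        args += ["-v", volume]
    args += [
        "--health-cmd", hc["test"][1],                  # the CMD-SHELL payload
        "--health-interval", f"{hc['interval']}s",      # log stores bare seconds
        "--health-retries", hc["retries"],
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
        svc["image"],
    ]
    return args


if __name__ == "__main__":
    print(" ".join(docker_run_args(service)))
```

Running the sketch only prints the assembled command; it does not start anything.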
2025-07-06 20:05:05.896772 | orchestrator | Sunday 06 July 2025 20:04:15 +0000 (0:00:04.055) 0:00:17.691 *********** 2025-07-06 20:05:05.896791 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:05:05.896811 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:05:05.896832 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:05:05.896851 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:05:05.896905 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:05:05.896920 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:05.896931 | orchestrator | 2025-07-06 20:05:05.896942 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-07-06 20:05:05.896953 | orchestrator | Sunday 06 July 2025 20:04:16 +0000 (0:00:00.977) 0:00:18.668 *********** 2025-07-06 20:05:05.896965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:05:05.896997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:05:05.897023 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:05:05.897037 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:05:05.897048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.897064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.897089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.897101 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:05:05.897129 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.897142 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.897159 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:05:05.897176 | orchestrator | 2025-07-06 20:05:05.897188 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-06 20:05:05.897199 | orchestrator | Sunday 06 July 2025 20:04:19 +0000 (0:00:02.660) 0:00:21.329 *********** 2025-07-06 20:05:05.897211 | orchestrator | 2025-07-06 20:05:05.897222 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-06 20:05:05.897233 | orchestrator | Sunday 06 July 2025 20:04:19 +0000 (0:00:00.301) 0:00:21.631 *********** 2025-07-06 20:05:05.897243 | orchestrator | 2025-07-06 20:05:05.897254 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2025-07-06 20:05:05.897265 | orchestrator | Sunday 06 July 2025 20:04:19 +0000 (0:00:00.199) 0:00:21.830 *********** 2025-07-06 20:05:05.897276 | orchestrator | 2025-07-06 20:05:05.897287 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-06 20:05:05.897298 | orchestrator | Sunday 06 July 2025 20:04:19 +0000 (0:00:00.103) 0:00:21.933 *********** 2025-07-06 20:05:05.897309 | orchestrator | 2025-07-06 20:05:05.897320 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-06 20:05:05.897331 | orchestrator | Sunday 06 July 2025 20:04:19 +0000 (0:00:00.097) 0:00:22.031 *********** 2025-07-06 20:05:05.897342 | orchestrator | 2025-07-06 20:05:05.897353 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-06 20:05:05.897363 | orchestrator | Sunday 06 July 2025 20:04:20 +0000 (0:00:00.133) 0:00:22.165 *********** 2025-07-06 20:05:05.897377 | orchestrator | 2025-07-06 20:05:05.897396 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-07-06 20:05:05.897414 | orchestrator | Sunday 06 July 2025 20:04:20 +0000 (0:00:00.182) 0:00:22.348 *********** 2025-07-06 20:05:05.897431 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:05:05.897449 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:05:05.897467 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:05:05.897487 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:05:05.897505 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:05:05.897523 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:05:05.897534 | orchestrator | 2025-07-06 20:05:05.897545 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-07-06 20:05:05.897556 | orchestrator | Sunday 06 July 2025 20:04:31 +0000 (0:00:10.867) 0:00:33.215 *********** 2025-07-06 20:05:05.897568 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:05:05.897579 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:05:05.897589 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:05:05.897600 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:05:05.897611 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:05:05.897622 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:05:05.897633 | orchestrator | 2025-07-06 20:05:05.897644 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-07-06 20:05:05.897655 | orchestrator | Sunday 06 July 2025 20:04:33 +0000 (0:00:02.648) 0:00:35.864 *********** 2025-07-06 20:05:05.897666 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:05:05.897677 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:05:05.897688 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:05:05.897699 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:05:05.897710 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:05:05.897720 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:05:05.897731 | orchestrator | 2025-07-06 20:05:05.897742 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-07-06 20:05:05.897753 | orchestrator | Sunday 06 July 2025 20:04:42 +0000 (0:00:09.123) 0:00:44.987 *********** 2025-07-06 20:05:05.897771 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 
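The handler restarts above bring up openvswitch_db and openvswitch_vswitchd on every node, and the task that starts here stamps each node's identity into the Open_vSwitch table (external_ids:system-id and external_ids:hostname) and drops the hw-offload key from other_config, matching the loop items in the log. A minimal sketch of the same effect on a single node, assuming the stock ovs-vsctl CLI, is shown below; the role applies these changes through Ansible modules rather than shell commands, so the commands are illustrative rather than the deployed code.

```python
# Minimal sketch of what the "Set system-id, hostname and hw-offload" items
# amount to on one node, assuming a reachable ovs-vsctl CLI. Illustration only.
import subprocess

NODE = "testbed-node-0"  # per-host value taken from the loop items in the log


def ovs_vsctl(*args: str) -> None:
    """Run ovs-vsctl against the local Open vSwitch database."""
    subprocess.run(["ovs-vsctl", *args], check=True)


# col=external_ids, name=system-id / hostname  ->  set key=value on the root record
ovs_vsctl("set", "Open_vSwitch", ".", f"external_ids:system-id={NODE}")
ovs_vsctl("set", "Open_vSwitch", ".", f"external_ids:hostname={NODE}")

# col=other_config, name=hw-offload, state=absent  ->  drop the key if present
ovs_vsctl("remove", "Open_vSwitch", ".", "other_config", "hw-offload")
```

The later "Ensuring OVS bridge is properly setup" and "Ensuring OVS ports are properly setup" tasks are the same idea for bridges and ports (roughly `ovs-vsctl --may-exist add-br br-ex` followed by adding the vxlan0 port); in this run only testbed-node-0/1/2 report changes there, while the other nodes skip the bridge setup.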
2025-07-06 20:05:05.897782 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-07-06 20:05:05.897808 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-07-06 20:05:05.897828 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-07-06 20:05:05.897848 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-07-06 20:05:05.897967 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-07-06 20:05:05.897983 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-07-06 20:05:05.897994 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-07-06 20:05:05.898005 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-07-06 20:05:05.898069 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-07-06 20:05:05.898084 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-07-06 20:05:05.898096 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-07-06 20:05:05.898107 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-06 20:05:05.898118 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-06 20:05:05.898143 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-06 20:05:05.898162 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-06 20:05:05.898181 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-06 20:05:05.898200 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-06 20:05:05.898218 | orchestrator | 2025-07-06 20:05:05.898236 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-07-06 20:05:05.898255 | orchestrator | Sunday 06 July 2025 20:04:49 +0000 (0:00:06.891) 0:00:51.879 *********** 2025-07-06 20:05:05.898273 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-07-06 20:05:05.898292 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:05:05.898311 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-07-06 20:05:05.898330 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:05:05.898348 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-07-06 20:05:05.898365 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:05.898382 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-07-06 20:05:05.898401 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-07-06 
20:05:05.898419 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-07-06 20:05:05.898438 | orchestrator | 2025-07-06 20:05:05.898451 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-07-06 20:05:05.898461 | orchestrator | Sunday 06 July 2025 20:04:51 +0000 (0:00:02.233) 0:00:54.112 *********** 2025-07-06 20:05:05.898470 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-07-06 20:05:05.898480 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:05:05.898490 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-07-06 20:05:05.898499 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:05:05.898509 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-07-06 20:05:05.898529 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:05.898539 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-07-06 20:05:05.898549 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-07-06 20:05:05.898559 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-07-06 20:05:05.898568 | orchestrator | 2025-07-06 20:05:05.898578 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-07-06 20:05:05.898587 | orchestrator | Sunday 06 July 2025 20:04:55 +0000 (0:00:03.568) 0:00:57.680 *********** 2025-07-06 20:05:05.898597 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:05:05.898607 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:05:05.898616 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:05:05.898626 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:05:05.898635 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:05:05.898645 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:05:05.898654 | orchestrator | 2025-07-06 20:05:05.898664 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:05:05.898675 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-06 20:05:05.898696 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-06 20:05:05.898707 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-06 20:05:05.898717 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-06 20:05:05.898726 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-06 20:05:05.898736 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-06 20:05:05.898746 | orchestrator | 2025-07-06 20:05:05.898756 | orchestrator | 2025-07-06 20:05:05.898765 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:05:05.898775 | orchestrator | Sunday 06 July 2025 20:05:03 +0000 (0:00:08.209) 0:01:05.890 *********** 2025-07-06 20:05:05.898785 | orchestrator | =============================================================================== 2025-07-06 20:05:05.898795 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.33s 2025-07-06 20:05:05.898804 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 
10.87s 2025-07-06 20:05:05.898814 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.89s 2025-07-06 20:05:05.898823 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.06s 2025-07-06 20:05:05.898836 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.57s 2025-07-06 20:05:05.898853 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.66s 2025-07-06 20:05:05.898903 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.65s 2025-07-06 20:05:05.898921 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.51s 2025-07-06 20:05:05.898938 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.23s 2025-07-06 20:05:05.898964 | orchestrator | module-load : Load modules ---------------------------------------------- 2.20s 2025-07-06 20:05:05.898981 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.12s 2025-07-06 20:05:05.898991 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.79s 2025-07-06 20:05:05.899002 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.22s 2025-07-06 20:05:05.899019 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.19s 2025-07-06 20:05:05.899029 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.19s 2025-07-06 20:05:05.899039 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.02s 2025-07-06 20:05:05.899048 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.00s 2025-07-06 20:05:05.899058 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.98s 2025-07-06 20:05:05.899068 | orchestrator | 2025-07-06 20:05:05 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:05.899077 | orchestrator | 2025-07-06 20:05:05 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:05.899088 | orchestrator | 2025-07-06 20:05:05 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:08.943306 | orchestrator | 2025-07-06 20:05:08 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:08.943413 | orchestrator | 2025-07-06 20:05:08 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:08.948798 | orchestrator | 2025-07-06 20:05:08 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:08.948832 | orchestrator | 2025-07-06 20:05:08 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:08.948844 | orchestrator | 2025-07-06 20:05:08 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:08.948856 | orchestrator | 2025-07-06 20:05:08 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:11.976616 | orchestrator | 2025-07-06 20:05:11 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:11.981260 | orchestrator | 2025-07-06 20:05:11 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:11.981628 | orchestrator | 2025-07-06 20:05:11 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f 
is in state STARTED 2025-07-06 20:05:11.982604 | orchestrator | 2025-07-06 20:05:11 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:11.983002 | orchestrator | 2025-07-06 20:05:11 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:11.983083 | orchestrator | 2025-07-06 20:05:11 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:15.014220 | orchestrator | 2025-07-06 20:05:15 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:15.015036 | orchestrator | 2025-07-06 20:05:15 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:15.015911 | orchestrator | 2025-07-06 20:05:15 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:15.016675 | orchestrator | 2025-07-06 20:05:15 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:15.017156 | orchestrator | 2025-07-06 20:05:15 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:15.017203 | orchestrator | 2025-07-06 20:05:15 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:18.074367 | orchestrator | 2025-07-06 20:05:18 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:18.074732 | orchestrator | 2025-07-06 20:05:18 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:18.075830 | orchestrator | 2025-07-06 20:05:18 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:18.076814 | orchestrator | 2025-07-06 20:05:18 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:18.078333 | orchestrator | 2025-07-06 20:05:18 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:18.079608 | orchestrator | 2025-07-06 20:05:18 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:21.129600 | orchestrator | 2025-07-06 20:05:21 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:21.132475 | orchestrator | 2025-07-06 20:05:21 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:21.133567 | orchestrator | 2025-07-06 20:05:21 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:21.134330 | orchestrator | 2025-07-06 20:05:21 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:21.138104 | orchestrator | 2025-07-06 20:05:21 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:21.138143 | orchestrator | 2025-07-06 20:05:21 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:24.171467 | orchestrator | 2025-07-06 20:05:24 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:24.172674 | orchestrator | 2025-07-06 20:05:24 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:24.173614 | orchestrator | 2025-07-06 20:05:24 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:24.178960 | orchestrator | 2025-07-06 20:05:24 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:24.181272 | orchestrator | 2025-07-06 20:05:24 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:24.181319 | orchestrator | 2025-07-06 20:05:24 | INFO  | Wait 1 second(s) 
until the next check 2025-07-06 20:05:27.235513 | orchestrator | 2025-07-06 20:05:27 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:27.240429 | orchestrator | 2025-07-06 20:05:27 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:27.245006 | orchestrator | 2025-07-06 20:05:27 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:27.256852 | orchestrator | 2025-07-06 20:05:27 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:27.260280 | orchestrator | 2025-07-06 20:05:27 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:27.260696 | orchestrator | 2025-07-06 20:05:27 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:30.294545 | orchestrator | 2025-07-06 20:05:30 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:30.295041 | orchestrator | 2025-07-06 20:05:30 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:30.295648 | orchestrator | 2025-07-06 20:05:30 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:30.296765 | orchestrator | 2025-07-06 20:05:30 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:30.298093 | orchestrator | 2025-07-06 20:05:30 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:30.298143 | orchestrator | 2025-07-06 20:05:30 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:33.343966 | orchestrator | 2025-07-06 20:05:33 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:33.346599 | orchestrator | 2025-07-06 20:05:33 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:33.349758 | orchestrator | 2025-07-06 20:05:33 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:33.352085 | orchestrator | 2025-07-06 20:05:33 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:33.353862 | orchestrator | 2025-07-06 20:05:33 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:33.353888 | orchestrator | 2025-07-06 20:05:33 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:36.404122 | orchestrator | 2025-07-06 20:05:36 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:36.409892 | orchestrator | 2025-07-06 20:05:36 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:36.410129 | orchestrator | 2025-07-06 20:05:36 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:36.412162 | orchestrator | 2025-07-06 20:05:36 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:36.413697 | orchestrator | 2025-07-06 20:05:36 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:36.413878 | orchestrator | 2025-07-06 20:05:36 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:39.469857 | orchestrator | 2025-07-06 20:05:39 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:39.471374 | orchestrator | 2025-07-06 20:05:39 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:39.474098 | orchestrator | 2025-07-06 20:05:39 | INFO  | Task 
bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:39.475978 | orchestrator | 2025-07-06 20:05:39 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:39.478134 | orchestrator | 2025-07-06 20:05:39 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:39.478194 | orchestrator | 2025-07-06 20:05:39 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:42.534544 | orchestrator | 2025-07-06 20:05:42 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:42.535752 | orchestrator | 2025-07-06 20:05:42 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:42.538756 | orchestrator | 2025-07-06 20:05:42 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:42.538858 | orchestrator | 2025-07-06 20:05:42 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:42.539085 | orchestrator | 2025-07-06 20:05:42 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:42.539164 | orchestrator | 2025-07-06 20:05:42 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:45.579124 | orchestrator | 2025-07-06 20:05:45 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:45.579441 | orchestrator | 2025-07-06 20:05:45 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:45.580421 | orchestrator | 2025-07-06 20:05:45 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:45.583030 | orchestrator | 2025-07-06 20:05:45 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:45.586509 | orchestrator | 2025-07-06 20:05:45 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:45.586566 | orchestrator | 2025-07-06 20:05:45 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:48.640088 | orchestrator | 2025-07-06 20:05:48 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:48.640646 | orchestrator | 2025-07-06 20:05:48 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:48.641291 | orchestrator | 2025-07-06 20:05:48 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:48.643068 | orchestrator | 2025-07-06 20:05:48 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:48.643138 | orchestrator | 2025-07-06 20:05:48 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:48.643154 | orchestrator | 2025-07-06 20:05:48 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:51.685201 | orchestrator | 2025-07-06 20:05:51 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:51.685416 | orchestrator | 2025-07-06 20:05:51 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:51.687087 | orchestrator | 2025-07-06 20:05:51 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:51.688943 | orchestrator | 2025-07-06 20:05:51 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:51.692262 | orchestrator | 2025-07-06 20:05:51 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:51.692325 | orchestrator | 2025-07-06 
20:05:51 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:54.776263 | orchestrator | 2025-07-06 20:05:54 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:54.778597 | orchestrator | 2025-07-06 20:05:54 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:54.782232 | orchestrator | 2025-07-06 20:05:54 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:54.786594 | orchestrator | 2025-07-06 20:05:54 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:54.788089 | orchestrator | 2025-07-06 20:05:54 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:54.788205 | orchestrator | 2025-07-06 20:05:54 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:05:57.844560 | orchestrator | 2025-07-06 20:05:57 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:05:57.850188 | orchestrator | 2025-07-06 20:05:57 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:05:57.852065 | orchestrator | 2025-07-06 20:05:57 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:05:57.854186 | orchestrator | 2025-07-06 20:05:57 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:05:57.860548 | orchestrator | 2025-07-06 20:05:57 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:05:57.860588 | orchestrator | 2025-07-06 20:05:57 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:00.893720 | orchestrator | 2025-07-06 20:06:00 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:06:00.893962 | orchestrator | 2025-07-06 20:06:00 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:00.894598 | orchestrator | 2025-07-06 20:06:00 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:06:00.898111 | orchestrator | 2025-07-06 20:06:00 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:00.899615 | orchestrator | 2025-07-06 20:06:00 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:00.900023 | orchestrator | 2025-07-06 20:06:00 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:03.923071 | orchestrator | 2025-07-06 20:06:03 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:06:03.923172 | orchestrator | 2025-07-06 20:06:03 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:03.923671 | orchestrator | 2025-07-06 20:06:03 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:06:03.924285 | orchestrator | 2025-07-06 20:06:03 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:03.925196 | orchestrator | 2025-07-06 20:06:03 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:03.925268 | orchestrator | 2025-07-06 20:06:03 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:06.956866 | orchestrator | 2025-07-06 20:06:06 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:06:06.957097 | orchestrator | 2025-07-06 20:06:06 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:06.957353 | orchestrator | 2025-07-06 
20:06:06 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:06:06.958173 | orchestrator | 2025-07-06 20:06:06 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:06.960933 | orchestrator | 2025-07-06 20:06:06 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:06.960985 | orchestrator | 2025-07-06 20:06:06 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:09.996334 | orchestrator | 2025-07-06 20:06:09 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state STARTED 2025-07-06 20:06:09.997661 | orchestrator | 2025-07-06 20:06:09 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:10.000186 | orchestrator | 2025-07-06 20:06:09 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:06:10.009743 | orchestrator | 2025-07-06 20:06:10 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:10.011913 | orchestrator | 2025-07-06 20:06:10 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:10.011946 | orchestrator | 2025-07-06 20:06:10 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:13.044104 | orchestrator | 2025-07-06 20:06:13.044201 | orchestrator | 2025-07-06 20:06:13.044216 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-07-06 20:06:13.044229 | orchestrator | 2025-07-06 20:06:13.044240 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-07-06 20:06:13.044253 | orchestrator | Sunday 06 July 2025 20:01:13 +0000 (0:00:00.165) 0:00:00.165 *********** 2025-07-06 20:06:13.044265 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:06:13.044277 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:06:13.044289 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:06:13.044300 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:13.044311 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:13.044322 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:13.044333 | orchestrator | 2025-07-06 20:06:13.044344 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-07-06 20:06:13.044356 | orchestrator | Sunday 06 July 2025 20:01:14 +0000 (0:00:00.694) 0:00:00.859 *********** 2025-07-06 20:06:13.044367 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:13.044379 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:13.044400 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:13.044433 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.044444 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.044455 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.044475 | orchestrator | 2025-07-06 20:06:13.044494 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-07-06 20:06:13.044513 | orchestrator | Sunday 06 July 2025 20:01:15 +0000 (0:00:00.643) 0:00:01.503 *********** 2025-07-06 20:06:13.044542 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:13.044562 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:13.044582 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:13.044600 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.044619 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.044637 | orchestrator | 
skipping: [testbed-node-2] 2025-07-06 20:06:13.044657 | orchestrator | 2025-07-06 20:06:13.044676 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-07-06 20:06:13.044696 | orchestrator | Sunday 06 July 2025 20:01:16 +0000 (0:00:00.869) 0:00:02.372 *********** 2025-07-06 20:06:13.044716 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:06:13.044736 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:06:13.044805 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:06:13.044825 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:13.044844 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:13.044863 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:13.044881 | orchestrator | 2025-07-06 20:06:13.044900 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-07-06 20:06:13.044920 | orchestrator | Sunday 06 July 2025 20:01:18 +0000 (0:00:02.043) 0:00:04.415 *********** 2025-07-06 20:06:13.044938 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:06:13.044958 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:06:13.044977 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:06:13.044991 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:13.045002 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:13.045013 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:13.045024 | orchestrator | 2025-07-06 20:06:13.045035 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-07-06 20:06:13.045046 | orchestrator | Sunday 06 July 2025 20:01:19 +0000 (0:00:01.288) 0:00:05.703 *********** 2025-07-06 20:06:13.045057 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:06:13.045068 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:06:13.045079 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:06:13.045090 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:13.045101 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:13.045111 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:13.045122 | orchestrator | 2025-07-06 20:06:13.045134 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-07-06 20:06:13.045145 | orchestrator | Sunday 06 July 2025 20:01:20 +0000 (0:00:01.114) 0:00:06.818 *********** 2025-07-06 20:06:13.045156 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:13.045167 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:13.045178 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:13.045189 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.045200 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.045210 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.045221 | orchestrator | 2025-07-06 20:06:13.045232 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-07-06 20:06:13.045243 | orchestrator | Sunday 06 July 2025 20:01:21 +0000 (0:00:00.729) 0:00:07.547 *********** 2025-07-06 20:06:13.045254 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:13.045265 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:13.045276 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:13.045286 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.045297 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.045308 | orchestrator | skipping: 
[testbed-node-2] 2025-07-06 20:06:13.045332 | orchestrator | 2025-07-06 20:06:13.045343 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-07-06 20:06:13.045354 | orchestrator | Sunday 06 July 2025 20:01:22 +0000 (0:00:00.690) 0:00:08.238 *********** 2025-07-06 20:06:13.045366 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-06 20:06:13.045377 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-06 20:06:13.045388 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:13.045399 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-06 20:06:13.045410 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-06 20:06:13.045420 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:13.045431 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-06 20:06:13.045442 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-06 20:06:13.045453 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:13.045464 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-06 20:06:13.045496 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-06 20:06:13.045508 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.045519 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-06 20:06:13.045530 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-06 20:06:13.045541 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.045552 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-06 20:06:13.045563 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-06 20:06:13.045574 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.045584 | orchestrator | 2025-07-06 20:06:13.045595 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-07-06 20:06:13.045606 | orchestrator | Sunday 06 July 2025 20:01:23 +0000 (0:00:00.979) 0:00:09.217 *********** 2025-07-06 20:06:13.045617 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:13.045628 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:13.045647 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:13.045659 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.045670 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.045680 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.045691 | orchestrator | 2025-07-06 20:06:13.045703 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-07-06 20:06:13.045714 | orchestrator | Sunday 06 July 2025 20:01:24 +0000 (0:00:01.146) 0:00:10.364 *********** 2025-07-06 20:06:13.045725 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:06:13.045737 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:06:13.045767 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:06:13.045778 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:13.045789 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:13.045800 | orchestrator | ok: 
[testbed-node-2] 2025-07-06 20:06:13.045811 | orchestrator | 2025-07-06 20:06:13.045822 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-07-06 20:06:13.045833 | orchestrator | Sunday 06 July 2025 20:01:24 +0000 (0:00:00.622) 0:00:10.986 *********** 2025-07-06 20:06:13.045844 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:13.045855 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:13.045866 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:06:13.045877 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:06:13.045888 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:13.045899 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:06:13.045910 | orchestrator | 2025-07-06 20:06:13.045921 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-07-06 20:06:13.045941 | orchestrator | Sunday 06 July 2025 20:01:43 +0000 (0:00:18.781) 0:00:29.768 *********** 2025-07-06 20:06:13.045952 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:13.045963 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:13.045974 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:13.045985 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.045996 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.046006 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.046130 | orchestrator | 2025-07-06 20:06:13.046147 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-07-06 20:06:13.046158 | orchestrator | Sunday 06 July 2025 20:01:44 +0000 (0:00:01.271) 0:00:31.039 *********** 2025-07-06 20:06:13.046170 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:13.046181 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:13.046191 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:13.046202 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.046213 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.046224 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.046235 | orchestrator | 2025-07-06 20:06:13.046247 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-07-06 20:06:13.046260 | orchestrator | Sunday 06 July 2025 20:01:46 +0000 (0:00:01.186) 0:00:32.226 *********** 2025-07-06 20:06:13.046271 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:13.046282 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:13.046293 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:13.046303 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.046314 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.046325 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.046336 | orchestrator | 2025-07-06 20:06:13.046347 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-07-06 20:06:13.046358 | orchestrator | Sunday 06 July 2025 20:01:46 +0000 (0:00:00.546) 0:00:32.772 *********** 2025-07-06 20:06:13.046369 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-07-06 20:06:13.046381 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-07-06 20:06:13.046392 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:13.046403 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-07-06 
20:06:13.046454 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-07-06 20:06:13.046465 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:13.046476 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-07-06 20:06:13.046487 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-07-06 20:06:13.046498 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:13.046509 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-07-06 20:06:13.046520 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-07-06 20:06:13.046531 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.046542 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-07-06 20:06:13.046553 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-07-06 20:06:13.046564 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.046575 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-07-06 20:06:13.046586 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-07-06 20:06:13.046597 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.046608 | orchestrator | 2025-07-06 20:06:13.046619 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-07-06 20:06:13.046641 | orchestrator | Sunday 06 July 2025 20:01:47 +0000 (0:00:01.172) 0:00:33.945 *********** 2025-07-06 20:06:13.046653 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:13.046664 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:13.046675 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:13.046686 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.046706 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.046717 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.046728 | orchestrator | 2025-07-06 20:06:13.046739 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-07-06 20:06:13.046785 | orchestrator | 2025-07-06 20:06:13.046797 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-07-06 20:06:13.046808 | orchestrator | Sunday 06 July 2025 20:01:49 +0000 (0:00:01.800) 0:00:35.746 *********** 2025-07-06 20:06:13.046819 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:13.046831 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:13.046842 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:13.046852 | orchestrator | 2025-07-06 20:06:13.046864 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-07-06 20:06:13.046881 | orchestrator | Sunday 06 July 2025 20:01:50 +0000 (0:00:00.694) 0:00:36.440 *********** 2025-07-06 20:06:13.046892 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:13.046903 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:13.046914 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:13.046925 | orchestrator | 2025-07-06 20:06:13.046936 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-07-06 20:06:13.046948 | orchestrator | Sunday 06 July 2025 20:01:51 +0000 (0:00:01.155) 0:00:37.596 *********** 2025-07-06 20:06:13.046959 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:13.046970 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:13.046980 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:13.046991 | 
orchestrator | 2025-07-06 20:06:13.047002 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-07-06 20:06:13.047013 | orchestrator | Sunday 06 July 2025 20:01:52 +0000 (0:00:01.068) 0:00:38.664 *********** 2025-07-06 20:06:13.047024 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:13.047035 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:13.047046 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:13.047057 | orchestrator | 2025-07-06 20:06:13.047068 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-07-06 20:06:13.047079 | orchestrator | Sunday 06 July 2025 20:01:53 +0000 (0:00:00.700) 0:00:39.364 *********** 2025-07-06 20:06:13.047090 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.047122 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.047135 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.047146 | orchestrator | 2025-07-06 20:06:13.047157 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-07-06 20:06:13.047168 | orchestrator | Sunday 06 July 2025 20:01:53 +0000 (0:00:00.534) 0:00:39.899 *********** 2025-07-06 20:06:13.047179 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:06:13.047191 | orchestrator | 2025-07-06 20:06:13.047202 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-07-06 20:06:13.047213 | orchestrator | Sunday 06 July 2025 20:01:54 +0000 (0:00:00.705) 0:00:40.604 *********** 2025-07-06 20:06:13.047224 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:13.047235 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:13.047246 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:13.047257 | orchestrator | 2025-07-06 20:06:13.047268 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-07-06 20:06:13.047279 | orchestrator | Sunday 06 July 2025 20:01:57 +0000 (0:00:02.821) 0:00:43.426 *********** 2025-07-06 20:06:13.047290 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.047301 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.047312 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:13.047323 | orchestrator | 2025-07-06 20:06:13.047334 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-07-06 20:06:13.047346 | orchestrator | Sunday 06 July 2025 20:01:58 +0000 (0:00:00.998) 0:00:44.424 *********** 2025-07-06 20:06:13.047357 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.047368 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.047387 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:13.047398 | orchestrator | 2025-07-06 20:06:13.047409 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-07-06 20:06:13.047420 | orchestrator | Sunday 06 July 2025 20:01:59 +0000 (0:00:01.071) 0:00:45.495 *********** 2025-07-06 20:06:13.047431 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.047442 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.047453 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:13.047464 | orchestrator | 2025-07-06 20:06:13.047476 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-07-06 20:06:13.047487 | 
orchestrator | Sunday 06 July 2025 20:02:01 +0000 (0:00:02.076) 0:00:47.572 *********** 2025-07-06 20:06:13.047498 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.047509 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.047520 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.047531 | orchestrator | 2025-07-06 20:06:13.047542 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-07-06 20:06:13.047553 | orchestrator | Sunday 06 July 2025 20:02:01 +0000 (0:00:00.367) 0:00:47.940 *********** 2025-07-06 20:06:13.047564 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.047575 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.047586 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.047597 | orchestrator | 2025-07-06 20:06:13.047608 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-07-06 20:06:13.047619 | orchestrator | Sunday 06 July 2025 20:02:02 +0000 (0:00:00.302) 0:00:48.243 *********** 2025-07-06 20:06:13.047630 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:13.047641 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:13.047652 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:13.047663 | orchestrator | 2025-07-06 20:06:13.047674 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-07-06 20:06:13.047686 | orchestrator | Sunday 06 July 2025 20:02:04 +0000 (0:00:02.013) 0:00:50.256 *********** 2025-07-06 20:06:13.047705 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-07-06 20:06:13.047718 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-07-06 20:06:13.047729 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-07-06 20:06:13.047740 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-07-06 20:06:13.047770 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-07-06 20:06:13.047786 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-07-06 20:06:13.047798 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-07-06 20:06:13.047809 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-07-06 20:06:13.047820 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-07-06 20:06:13.047831 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-07-06 20:06:13.047842 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
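For readers following the retries above, a minimal illustrative sketch of how such a join check is commonly expressed in an Ansible role: poll the list of master nodes until every expected host has registered, with up to 20 retries as seen in the log. The command, the inventory group name, and the variable names below are assumptions for illustration, not necessarily the code of the k3s_server role used in this job.

- name: Verify that all nodes actually joined (check k3s-init.service if this fails)
  ansible.builtin.command:
    cmd: k3s kubectl get nodes -l "node-role.kubernetes.io/master=true" -o name
  register: nodes
  # Succeed only once the number of registered masters matches the inventory group size.
  until: nodes.rc == 0 and (nodes.stdout_lines | length) == (groups['master'] | length)
  retries: 20
  delay: 10
  changed_when: false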
2025-07-06 20:06:13.047861 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-07-06 20:06:13.047872 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-07-06 20:06:13.047883 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-07-06 20:06:13.047894 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:13.047905 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:13.047917 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:13.047928 | orchestrator | 2025-07-06 20:06:13.047939 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-07-06 20:06:13.047950 | orchestrator | Sunday 06 July 2025 20:02:59 +0000 (0:00:55.752) 0:01:46.009 *********** 2025-07-06 20:06:13.047965 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.047983 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.048000 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.048018 | orchestrator | 2025-07-06 20:06:13.048035 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-07-06 20:06:13.048054 | orchestrator | Sunday 06 July 2025 20:03:00 +0000 (0:00:00.301) 0:01:46.311 *********** 2025-07-06 20:06:13.048072 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:13.048090 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:13.048108 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:13.048128 | orchestrator | 2025-07-06 20:06:13.048148 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-07-06 20:06:13.048165 | orchestrator | Sunday 06 July 2025 20:03:01 +0000 (0:00:00.990) 0:01:47.301 *********** 2025-07-06 20:06:13.048184 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:13.048196 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:13.048207 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:13.048218 | orchestrator | 2025-07-06 20:06:13.048229 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-07-06 20:06:13.048240 | orchestrator | Sunday 06 July 2025 20:03:02 +0000 (0:00:01.176) 0:01:48.477 *********** 2025-07-06 20:06:13.048251 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:13.048262 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:13.048272 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:13.048283 | orchestrator | 2025-07-06 20:06:13.048294 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-07-06 20:06:13.048305 | orchestrator | Sunday 06 July 2025 20:03:15 +0000 (0:00:13.721) 0:02:02.198 *********** 2025-07-06 20:06:13.048316 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:13.048327 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:13.048338 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:13.048349 | orchestrator | 2025-07-06 20:06:13.048359 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-07-06 20:06:13.048371 | orchestrator | Sunday 06 July 2025 20:03:16 +0000 (0:00:00.766) 0:02:02.965 *********** 2025-07-06 20:06:13.048381 | orchestrator | ok: [testbed-node-0] 
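As an aside on the node-token steps above and below, a minimal sketch of the usual pattern: record the token file's access mode, relax it so the operator can read it, slurp the token, keep it as a fact for the agent nodes, then restore the original mode. The paths, task names, and variable names are assumptions for illustration only, not the role's actual implementation.

- name: Register node-token file access mode
  ansible.builtin.stat:
    path: /var/lib/rancher/k3s/server/node-token
  register: node_token_stat

- name: Change file access node-token
  ansible.builtin.file:
    path: /var/lib/rancher/k3s/server/node-token
    mode: "g+rx,o+rx"

- name: Read node-token from master
  ansible.builtin.slurp:
    src: /var/lib/rancher/k3s/server/node-token
  register: node_token_slurp

- name: Store Master node-token
  ansible.builtin.set_fact:
    token: "{{ node_token_slurp.content | b64decode | trim }}"

- name: Restore node-token file access
  ansible.builtin.file:
    path: /var/lib/rancher/k3s/server/node-token
    mode: "{{ node_token_stat.stat.mode }}"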
2025-07-06 20:06:13.048392 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:13.048403 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:13.048414 | orchestrator | 2025-07-06 20:06:13.048425 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-07-06 20:06:13.048436 | orchestrator | Sunday 06 July 2025 20:03:17 +0000 (0:00:00.629) 0:02:03.595 *********** 2025-07-06 20:06:13.048447 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:13.048457 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:13.048468 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:13.048479 | orchestrator | 2025-07-06 20:06:13.048490 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-07-06 20:06:13.048509 | orchestrator | Sunday 06 July 2025 20:03:17 +0000 (0:00:00.588) 0:02:04.183 *********** 2025-07-06 20:06:13.048521 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:13.048542 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:13.048553 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:13.048564 | orchestrator | 2025-07-06 20:06:13.048575 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-07-06 20:06:13.048586 | orchestrator | Sunday 06 July 2025 20:03:18 +0000 (0:00:00.837) 0:02:05.021 *********** 2025-07-06 20:06:13.048596 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:13.048607 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:13.048618 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:13.048629 | orchestrator | 2025-07-06 20:06:13.048639 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-07-06 20:06:13.048650 | orchestrator | Sunday 06 July 2025 20:03:19 +0000 (0:00:00.273) 0:02:05.294 *********** 2025-07-06 20:06:13.048661 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:13.048672 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:13.048683 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:13.048694 | orchestrator | 2025-07-06 20:06:13.048705 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-07-06 20:06:13.048716 | orchestrator | Sunday 06 July 2025 20:03:19 +0000 (0:00:00.587) 0:02:05.882 *********** 2025-07-06 20:06:13.048726 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:13.048738 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:13.048780 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:13.048792 | orchestrator | 2025-07-06 20:06:13.048803 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-07-06 20:06:13.048814 | orchestrator | Sunday 06 July 2025 20:03:20 +0000 (0:00:00.603) 0:02:06.485 *********** 2025-07-06 20:06:13.048825 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:13.048836 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:13.048847 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:13.048857 | orchestrator | 2025-07-06 20:06:13.048868 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-07-06 20:06:13.048879 | orchestrator | Sunday 06 July 2025 20:03:21 +0000 (0:00:01.078) 0:02:07.564 *********** 2025-07-06 20:06:13.048890 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:13.048900 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:13.048911 | orchestrator | changed: 
[testbed-node-2] 2025-07-06 20:06:13.048922 | orchestrator | 2025-07-06 20:06:13.048933 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-07-06 20:06:13.049600 | orchestrator | Sunday 06 July 2025 20:03:22 +0000 (0:00:00.754) 0:02:08.318 *********** 2025-07-06 20:06:13.049620 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.049632 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.049643 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.049654 | orchestrator | 2025-07-06 20:06:13.049665 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-07-06 20:06:13.049676 | orchestrator | Sunday 06 July 2025 20:03:22 +0000 (0:00:00.253) 0:02:08.571 *********** 2025-07-06 20:06:13.049686 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.049698 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.049708 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.049719 | orchestrator | 2025-07-06 20:06:13.049730 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-07-06 20:06:13.049741 | orchestrator | Sunday 06 July 2025 20:03:22 +0000 (0:00:00.295) 0:02:08.867 *********** 2025-07-06 20:06:13.049823 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:13.049836 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:13.049847 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:13.049858 | orchestrator | 2025-07-06 20:06:13.049884 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-07-06 20:06:13.049896 | orchestrator | Sunday 06 July 2025 20:03:23 +0000 (0:00:00.883) 0:02:09.750 *********** 2025-07-06 20:06:13.049907 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:13.049918 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:13.049929 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:13.049953 | orchestrator | 2025-07-06 20:06:13.049965 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-07-06 20:06:13.049976 | orchestrator | Sunday 06 July 2025 20:03:24 +0000 (0:00:00.580) 0:02:10.330 *********** 2025-07-06 20:06:13.049987 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-07-06 20:06:13.049999 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-07-06 20:06:13.050010 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-07-06 20:06:13.050057 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-07-06 20:06:13.050067 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-07-06 20:06:13.050077 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-07-06 20:06:13.050086 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-07-06 20:06:13.050096 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-07-06 20:06:13.050106 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-07-06 20:06:13.050116 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-07-06 20:06:13.050125 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-07-06 20:06:13.050135 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-07-06 20:06:13.050145 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-07-06 20:06:13.050165 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-07-06 20:06:13.050176 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-07-06 20:06:13.050192 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-07-06 20:06:13.050202 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-07-06 20:06:13.050212 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-07-06 20:06:13.050221 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-07-06 20:06:13.050231 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-07-06 20:06:13.050241 | orchestrator | 2025-07-06 20:06:13.050250 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-07-06 20:06:13.050260 | orchestrator | 2025-07-06 20:06:13.050270 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-07-06 20:06:13.050279 | orchestrator | Sunday 06 July 2025 20:03:27 +0000 (0:00:02.967) 0:02:13.297 *********** 2025-07-06 20:06:13.050289 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:06:13.050298 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:06:13.050308 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:06:13.050318 | orchestrator | 2025-07-06 20:06:13.050328 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-07-06 20:06:13.050337 | orchestrator | Sunday 06 July 2025 20:03:27 +0000 (0:00:00.540) 0:02:13.838 *********** 2025-07-06 20:06:13.050347 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:06:13.050357 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:06:13.050366 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:06:13.050376 | orchestrator | 2025-07-06 20:06:13.050385 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-07-06 20:06:13.050395 | orchestrator | Sunday 06 July 2025 20:03:28 +0000 (0:00:00.595) 0:02:14.434 *********** 2025-07-06 20:06:13.050415 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:06:13.050425 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:06:13.050435 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:06:13.050444 | orchestrator | 2025-07-06 20:06:13.050454 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-07-06 20:06:13.050464 | orchestrator | Sunday 06 July 2025 20:03:28 +0000 (0:00:00.295) 0:02:14.730 *********** 2025-07-06 20:06:13.050473 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:06:13.050483 | orchestrator | 2025-07-06 20:06:13.050493 | 
orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-07-06 20:06:13.050502 | orchestrator | Sunday 06 July 2025 20:03:29 +0000 (0:00:00.680) 0:02:15.411 *********** 2025-07-06 20:06:13.050512 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:13.050522 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:13.050531 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:13.050541 | orchestrator | 2025-07-06 20:06:13.050550 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-07-06 20:06:13.050560 | orchestrator | Sunday 06 July 2025 20:03:29 +0000 (0:00:00.286) 0:02:15.698 *********** 2025-07-06 20:06:13.050570 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:13.050579 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:13.050589 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:13.050598 | orchestrator | 2025-07-06 20:06:13.050608 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-07-06 20:06:13.050618 | orchestrator | Sunday 06 July 2025 20:03:29 +0000 (0:00:00.284) 0:02:15.982 *********** 2025-07-06 20:06:13.050627 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:13.050637 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:13.050647 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:13.050656 | orchestrator | 2025-07-06 20:06:13.050666 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-07-06 20:06:13.050675 | orchestrator | Sunday 06 July 2025 20:03:30 +0000 (0:00:00.306) 0:02:16.289 *********** 2025-07-06 20:06:13.050685 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:06:13.050695 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:06:13.050705 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:06:13.050714 | orchestrator | 2025-07-06 20:06:13.050724 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-07-06 20:06:13.050733 | orchestrator | Sunday 06 July 2025 20:03:31 +0000 (0:00:01.471) 0:02:17.760 *********** 2025-07-06 20:06:13.050761 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:06:13.050778 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:06:13.050796 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:06:13.050811 | orchestrator | 2025-07-06 20:06:13.050826 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-07-06 20:06:13.050840 | orchestrator | 2025-07-06 20:06:13.050850 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-07-06 20:06:13.050860 | orchestrator | Sunday 06 July 2025 20:03:40 +0000 (0:00:08.610) 0:02:26.370 *********** 2025-07-06 20:06:13.050869 | orchestrator | ok: [testbed-manager] 2025-07-06 20:06:13.050879 | orchestrator | 2025-07-06 20:06:13.050889 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-07-06 20:06:13.050899 | orchestrator | Sunday 06 July 2025 20:03:41 +0000 (0:00:00.853) 0:02:27.224 *********** 2025-07-06 20:06:13.050909 | orchestrator | changed: [testbed-manager] 2025-07-06 20:06:13.050918 | orchestrator | 2025-07-06 20:06:13.050928 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-07-06 20:06:13.050938 | orchestrator | Sunday 06 July 2025 20:03:41 +0000 
(0:00:00.571) 0:02:27.795 *********** 2025-07-06 20:06:13.050947 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-07-06 20:06:13.050957 | orchestrator | 2025-07-06 20:06:13.050966 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-07-06 20:06:13.050997 | orchestrator | Sunday 06 July 2025 20:03:42 +0000 (0:00:01.011) 0:02:28.807 *********** 2025-07-06 20:06:13.051007 | orchestrator | changed: [testbed-manager] 2025-07-06 20:06:13.051017 | orchestrator | 2025-07-06 20:06:13.051034 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-07-06 20:06:13.051044 | orchestrator | Sunday 06 July 2025 20:03:43 +0000 (0:00:00.779) 0:02:29.586 *********** 2025-07-06 20:06:13.051059 | orchestrator | changed: [testbed-manager] 2025-07-06 20:06:13.051069 | orchestrator | 2025-07-06 20:06:13.051079 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-07-06 20:06:13.051089 | orchestrator | Sunday 06 July 2025 20:03:43 +0000 (0:00:00.509) 0:02:30.095 *********** 2025-07-06 20:06:13.051098 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-06 20:06:13.051108 | orchestrator | 2025-07-06 20:06:13.051118 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-07-06 20:06:13.051128 | orchestrator | Sunday 06 July 2025 20:03:45 +0000 (0:00:01.449) 0:02:31.544 *********** 2025-07-06 20:06:13.051137 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-06 20:06:13.051147 | orchestrator | 2025-07-06 20:06:13.051156 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-07-06 20:06:13.051166 | orchestrator | Sunday 06 July 2025 20:03:46 +0000 (0:00:00.833) 0:02:32.378 *********** 2025-07-06 20:06:13.051175 | orchestrator | changed: [testbed-manager] 2025-07-06 20:06:13.051185 | orchestrator | 2025-07-06 20:06:13.051195 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-07-06 20:06:13.051204 | orchestrator | Sunday 06 July 2025 20:03:46 +0000 (0:00:00.462) 0:02:32.840 *********** 2025-07-06 20:06:13.051214 | orchestrator | changed: [testbed-manager] 2025-07-06 20:06:13.051223 | orchestrator | 2025-07-06 20:06:13.051233 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-07-06 20:06:13.051243 | orchestrator | 2025-07-06 20:06:13.051252 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-07-06 20:06:13.051262 | orchestrator | Sunday 06 July 2025 20:03:47 +0000 (0:00:00.470) 0:02:33.311 *********** 2025-07-06 20:06:13.051272 | orchestrator | ok: [testbed-manager] 2025-07-06 20:06:13.051281 | orchestrator | 2025-07-06 20:06:13.051291 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-07-06 20:06:13.051300 | orchestrator | Sunday 06 July 2025 20:03:47 +0000 (0:00:00.150) 0:02:33.461 *********** 2025-07-06 20:06:13.051310 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-07-06 20:06:13.051320 | orchestrator | 2025-07-06 20:06:13.051329 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-07-06 20:06:13.051339 | orchestrator | Sunday 06 July 2025 20:03:47 +0000 (0:00:00.431) 0:02:33.893 *********** 2025-07-06 20:06:13.051349 
| orchestrator | ok: [testbed-manager] 2025-07-06 20:06:13.051358 | orchestrator | 2025-07-06 20:06:13.051368 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-07-06 20:06:13.051377 | orchestrator | Sunday 06 July 2025 20:03:48 +0000 (0:00:00.861) 0:02:34.755 *********** 2025-07-06 20:06:13.051387 | orchestrator | ok: [testbed-manager] 2025-07-06 20:06:13.051397 | orchestrator | 2025-07-06 20:06:13.051406 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-07-06 20:06:13.051416 | orchestrator | Sunday 06 July 2025 20:03:50 +0000 (0:00:01.649) 0:02:36.404 *********** 2025-07-06 20:06:13.051426 | orchestrator | changed: [testbed-manager] 2025-07-06 20:06:13.051436 | orchestrator | 2025-07-06 20:06:13.051445 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-07-06 20:06:13.051455 | orchestrator | Sunday 06 July 2025 20:03:51 +0000 (0:00:00.867) 0:02:37.271 *********** 2025-07-06 20:06:13.051465 | orchestrator | ok: [testbed-manager] 2025-07-06 20:06:13.051474 | orchestrator | 2025-07-06 20:06:13.051484 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-07-06 20:06:13.051493 | orchestrator | Sunday 06 July 2025 20:03:51 +0000 (0:00:00.526) 0:02:37.797 *********** 2025-07-06 20:06:13.051510 | orchestrator | changed: [testbed-manager] 2025-07-06 20:06:13.051520 | orchestrator | 2025-07-06 20:06:13.051529 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-07-06 20:06:13.051539 | orchestrator | Sunday 06 July 2025 20:03:57 +0000 (0:00:05.988) 0:02:43.785 *********** 2025-07-06 20:06:13.051548 | orchestrator | changed: [testbed-manager] 2025-07-06 20:06:13.051558 | orchestrator | 2025-07-06 20:06:13.051568 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-07-06 20:06:13.051577 | orchestrator | Sunday 06 July 2025 20:04:08 +0000 (0:00:10.454) 0:02:54.239 *********** 2025-07-06 20:06:13.051587 | orchestrator | ok: [testbed-manager] 2025-07-06 20:06:13.051597 | orchestrator | 2025-07-06 20:06:13.051606 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-07-06 20:06:13.051616 | orchestrator | 2025-07-06 20:06:13.051625 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-07-06 20:06:13.051635 | orchestrator | Sunday 06 July 2025 20:04:08 +0000 (0:00:00.497) 0:02:54.737 *********** 2025-07-06 20:06:13.051645 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:13.051654 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:13.051664 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:13.051673 | orchestrator | 2025-07-06 20:06:13.051683 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-07-06 20:06:13.051693 | orchestrator | Sunday 06 July 2025 20:04:08 +0000 (0:00:00.431) 0:02:55.168 *********** 2025-07-06 20:06:13.051702 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.051712 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.051721 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.051731 | orchestrator | 2025-07-06 20:06:13.051741 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-07-06 20:06:13.051773 | orchestrator | Sunday 06 
July 2025 20:04:09 +0000 (0:00:00.278) 0:02:55.447 *********** 2025-07-06 20:06:13.051783 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:06:13.051793 | orchestrator | 2025-07-06 20:06:13.051802 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-07-06 20:06:13.051812 | orchestrator | Sunday 06 July 2025 20:04:09 +0000 (0:00:00.502) 0:02:55.949 *********** 2025-07-06 20:06:13.051822 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-07-06 20:06:13.051831 | orchestrator | 2025-07-06 20:06:13.051846 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-07-06 20:06:13.051856 | orchestrator | Sunday 06 July 2025 20:04:10 +0000 (0:00:01.138) 0:02:57.088 *********** 2025-07-06 20:06:13.051871 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:06:13.051881 | orchestrator | 2025-07-06 20:06:13.051890 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-07-06 20:06:13.051900 | orchestrator | Sunday 06 July 2025 20:04:11 +0000 (0:00:00.879) 0:02:57.968 *********** 2025-07-06 20:06:13.051910 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.051920 | orchestrator | 2025-07-06 20:06:13.051929 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-07-06 20:06:13.051939 | orchestrator | Sunday 06 July 2025 20:04:11 +0000 (0:00:00.227) 0:02:58.195 *********** 2025-07-06 20:06:13.051948 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:06:13.051958 | orchestrator | 2025-07-06 20:06:13.051968 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-07-06 20:06:13.051977 | orchestrator | Sunday 06 July 2025 20:04:13 +0000 (0:00:01.113) 0:02:59.308 *********** 2025-07-06 20:06:13.051987 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.051997 | orchestrator | 2025-07-06 20:06:13.052007 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-07-06 20:06:13.052017 | orchestrator | Sunday 06 July 2025 20:04:13 +0000 (0:00:00.207) 0:02:59.516 *********** 2025-07-06 20:06:13.052026 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.052042 | orchestrator | 2025-07-06 20:06:13.052052 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-07-06 20:06:13.052062 | orchestrator | Sunday 06 July 2025 20:04:13 +0000 (0:00:00.168) 0:02:59.685 *********** 2025-07-06 20:06:13.052071 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.052081 | orchestrator | 2025-07-06 20:06:13.052091 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-07-06 20:06:13.052100 | orchestrator | Sunday 06 July 2025 20:04:13 +0000 (0:00:00.149) 0:02:59.835 *********** 2025-07-06 20:06:13.052110 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.052120 | orchestrator | 2025-07-06 20:06:13.052129 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-07-06 20:06:13.052139 | orchestrator | Sunday 06 July 2025 20:04:13 +0000 (0:00:00.175) 0:03:00.010 *********** 2025-07-06 20:06:13.052149 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-07-06 20:06:13.052158 | orchestrator | 2025-07-06 20:06:13.052168 | orchestrator | 
TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-07-06 20:06:13.052177 | orchestrator | Sunday 06 July 2025 20:04:18 +0000 (0:00:04.368) 0:03:04.378 *********** 2025-07-06 20:06:13.052187 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-07-06 20:06:13.052197 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2025-07-06 20:06:13.052207 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (29 retries left). 2025-07-06 20:06:13.052216 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-07-06 20:06:13.052226 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-07-06 20:06:13.052235 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-07-06 20:06:13.052245 | orchestrator | 2025-07-06 20:06:13.052255 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-07-06 20:06:13.052264 | orchestrator | Sunday 06 July 2025 20:05:38 +0000 (0:01:20.360) 0:04:24.739 *********** 2025-07-06 20:06:13.052274 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:06:13.052284 | orchestrator | 2025-07-06 20:06:13.052293 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-07-06 20:06:13.052303 | orchestrator | Sunday 06 July 2025 20:05:39 +0000 (0:00:01.409) 0:04:26.148 *********** 2025-07-06 20:06:13.052313 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-07-06 20:06:13.052322 | orchestrator | 2025-07-06 20:06:13.052332 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-07-06 20:06:13.052341 | orchestrator | Sunday 06 July 2025 20:05:41 +0000 (0:00:01.673) 0:04:27.822 *********** 2025-07-06 20:06:13.052351 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-07-06 20:06:13.052360 | orchestrator | 2025-07-06 20:06:13.052370 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-07-06 20:06:13.052380 | orchestrator | Sunday 06 July 2025 20:05:43 +0000 (0:00:01.825) 0:04:29.648 *********** 2025-07-06 20:06:13.052389 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.052399 | orchestrator | 2025-07-06 20:06:13.052408 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-07-06 20:06:13.052418 | orchestrator | Sunday 06 July 2025 20:05:43 +0000 (0:00:00.220) 0:04:29.868 *********** 2025-07-06 20:06:13.052428 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-07-06 20:06:13.052438 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-07-06 20:06:13.052447 | orchestrator | 2025-07-06 20:06:13.052457 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-07-06 20:06:13.052467 | orchestrator | Sunday 06 July 2025 20:05:46 +0000 (0:00:02.370) 0:04:32.239 *********** 2025-07-06 20:06:13.052476 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.052491 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.052501 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.052511 | orchestrator | 2025-07-06 20:06:13.052520 | orchestrator | TASK [k3s_server_post : Remove tmp 
directory used for manifests] *************** 2025-07-06 20:06:13.052530 | orchestrator | Sunday 06 July 2025 20:05:46 +0000 (0:00:00.459) 0:04:32.698 *********** 2025-07-06 20:06:13.052540 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:13.052549 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:13.052559 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:13.052568 | orchestrator | 2025-07-06 20:06:13.052584 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-07-06 20:06:13.052594 | orchestrator | 2025-07-06 20:06:13.052604 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-07-06 20:06:13.052618 | orchestrator | Sunday 06 July 2025 20:05:47 +0000 (0:00:00.982) 0:04:33.681 *********** 2025-07-06 20:06:13.052628 | orchestrator | ok: [testbed-manager] 2025-07-06 20:06:13.052637 | orchestrator | 2025-07-06 20:06:13.052647 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-07-06 20:06:13.052656 | orchestrator | Sunday 06 July 2025 20:05:47 +0000 (0:00:00.465) 0:04:34.146 *********** 2025-07-06 20:06:13.052666 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-07-06 20:06:13.052676 | orchestrator | 2025-07-06 20:06:13.052685 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-07-06 20:06:13.052695 | orchestrator | Sunday 06 July 2025 20:05:48 +0000 (0:00:00.220) 0:04:34.367 *********** 2025-07-06 20:06:13.052704 | orchestrator | changed: [testbed-manager] 2025-07-06 20:06:13.052714 | orchestrator | 2025-07-06 20:06:13.052724 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-07-06 20:06:13.052733 | orchestrator | 2025-07-06 20:06:13.052762 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-07-06 20:06:13.052774 | orchestrator | Sunday 06 July 2025 20:05:54 +0000 (0:00:06.366) 0:04:40.733 *********** 2025-07-06 20:06:13.052783 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:06:13.052793 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:06:13.052803 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:06:13.052812 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:13.052822 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:13.052832 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:13.052841 | orchestrator | 2025-07-06 20:06:13.052851 | orchestrator | TASK [Manage labels] *********************************************************** 2025-07-06 20:06:13.052860 | orchestrator | Sunday 06 July 2025 20:05:55 +0000 (0:00:01.132) 0:04:41.866 *********** 2025-07-06 20:06:13.052870 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-07-06 20:06:13.052880 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-07-06 20:06:13.052890 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-07-06 20:06:13.052899 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-07-06 20:06:13.052909 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-07-06 20:06:13.052918 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-07-06 
20:06:13.052928 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-07-06 20:06:13.052938 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-07-06 20:06:13.052948 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-07-06 20:06:13.052957 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-07-06 20:06:13.052967 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-07-06 20:06:13.052976 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-07-06 20:06:13.052992 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-07-06 20:06:13.053002 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-07-06 20:06:13.053011 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-07-06 20:06:13.053021 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-07-06 20:06:13.053031 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-07-06 20:06:13.053041 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-07-06 20:06:13.053050 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-07-06 20:06:13.053060 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-07-06 20:06:13.053070 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-07-06 20:06:13.053079 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-07-06 20:06:13.053089 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-07-06 20:06:13.053098 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-07-06 20:06:13.053108 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-07-06 20:06:13.053118 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-07-06 20:06:13.053127 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-07-06 20:06:13.053137 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-07-06 20:06:13.053146 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-07-06 20:06:13.053156 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-07-06 20:06:13.053166 | orchestrator | 2025-07-06 20:06:13.053176 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-07-06 20:06:13.053192 | orchestrator | Sunday 06 July 2025 20:06:09 +0000 (0:00:13.534) 0:04:55.401 *********** 2025-07-06 20:06:13.053202 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:13.053211 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:13.053226 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:13.053236 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.053245 
| orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.053255 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.053265 | orchestrator | 2025-07-06 20:06:13.053275 | orchestrator | TASK [Manage taints] *********************************************************** 2025-07-06 20:06:13.053285 | orchestrator | Sunday 06 July 2025 20:06:09 +0000 (0:00:00.411) 0:04:55.812 *********** 2025-07-06 20:06:13.053294 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:13.053304 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:13.053314 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:13.053323 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:13.053333 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:13.053342 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:13.053352 | orchestrator | 2025-07-06 20:06:13.053362 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:06:13.053372 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:06:13.053382 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-07-06 20:06:13.053392 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-07-06 20:06:13.053408 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-07-06 20:06:13.053418 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-07-06 20:06:13.053428 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-07-06 20:06:13.053438 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-07-06 20:06:13.053448 | orchestrator | 2025-07-06 20:06:13.053457 | orchestrator | 2025-07-06 20:06:13.053467 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:06:13.053477 | orchestrator | Sunday 06 July 2025 20:06:10 +0000 (0:00:00.475) 0:04:56.288 *********** 2025-07-06 20:06:13.053487 | orchestrator | =============================================================================== 2025-07-06 20:06:13.053496 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 80.36s 2025-07-06 20:06:13.053506 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.75s 2025-07-06 20:06:13.053516 | orchestrator | k3s_download : Download k3s binary x64 --------------------------------- 18.78s 2025-07-06 20:06:13.053525 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 13.72s 2025-07-06 20:06:13.053535 | orchestrator | Manage labels ---------------------------------------------------------- 13.53s 2025-07-06 20:06:13.053545 | orchestrator | kubectl : Install required packages ------------------------------------ 10.45s 2025-07-06 20:06:13.053555 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.61s 2025-07-06 20:06:13.053564 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.37s 2025-07-06 20:06:13.053574 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 5.99s 2025-07-06 20:06:13.053584 
| orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.37s 2025-07-06 20:06:13.053593 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.97s 2025-07-06 20:06:13.053603 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.82s 2025-07-06 20:06:13.053613 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.37s 2025-07-06 20:06:13.053623 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.08s 2025-07-06 20:06:13.053633 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.04s 2025-07-06 20:06:13.053642 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.01s 2025-07-06 20:06:13.053652 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 1.83s 2025-07-06 20:06:13.053662 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.80s 2025-07-06 20:06:13.053671 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.67s 2025-07-06 20:06:13.053681 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.65s 2025-07-06 20:06:13.053691 | orchestrator | 2025-07-06 20:06:13 | INFO  | Task eb95468e-18c9-49b1-aa8b-d0264298feaa is in state SUCCESS 2025-07-06 20:06:13.053982 | orchestrator | 2025-07-06 20:06:13 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:13.054009 | orchestrator | 2025-07-06 20:06:13 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:06:13.054093 | orchestrator | 2025-07-06 20:06:13 | INFO  | Task a99b45e0-1be9-46ea-b03e-138273ea6a25 is in state STARTED 2025-07-06 20:06:13.054125 | orchestrator | 2025-07-06 20:06:13 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:13.054143 | orchestrator | 2025-07-06 20:06:13 | INFO  | Task 5fea17f9-652d-4093-a477-46e5fb4d5b98 is in state STARTED 2025-07-06 20:06:13.054161 | orchestrator | 2025-07-06 20:06:13 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:13.054173 | orchestrator | 2025-07-06 20:06:13 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:16.084119 | orchestrator | 2025-07-06 20:06:16 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:16.084209 | orchestrator | 2025-07-06 20:06:16 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:06:16.084221 | orchestrator | 2025-07-06 20:06:16 | INFO  | Task a99b45e0-1be9-46ea-b03e-138273ea6a25 is in state STARTED 2025-07-06 20:06:16.084241 | orchestrator | 2025-07-06 20:06:16 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:16.084622 | orchestrator | 2025-07-06 20:06:16 | INFO  | Task 5fea17f9-652d-4093-a477-46e5fb4d5b98 is in state STARTED 2025-07-06 20:06:16.085430 | orchestrator | 2025-07-06 20:06:16 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:16.085542 | orchestrator | 2025-07-06 20:06:16 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:19.127391 | orchestrator | 2025-07-06 20:06:19 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 
20:06:19.129217 | orchestrator | 2025-07-06 20:06:19 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:06:19.131178 | orchestrator | 2025-07-06 20:06:19 | INFO  | Task a99b45e0-1be9-46ea-b03e-138273ea6a25 is in state STARTED 2025-07-06 20:06:19.132842 | orchestrator | 2025-07-06 20:06:19 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:19.134479 | orchestrator | 2025-07-06 20:06:19 | INFO  | Task 5fea17f9-652d-4093-a477-46e5fb4d5b98 is in state SUCCESS 2025-07-06 20:06:19.135373 | orchestrator | 2025-07-06 20:06:19 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:19.135530 | orchestrator | 2025-07-06 20:06:19 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:22.171612 | orchestrator | 2025-07-06 20:06:22 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:22.173685 | orchestrator | 2025-07-06 20:06:22 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:06:22.175171 | orchestrator | 2025-07-06 20:06:22 | INFO  | Task a99b45e0-1be9-46ea-b03e-138273ea6a25 is in state SUCCESS 2025-07-06 20:06:22.177203 | orchestrator | 2025-07-06 20:06:22 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:22.178781 | orchestrator | 2025-07-06 20:06:22 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:22.178831 | orchestrator | 2025-07-06 20:06:22 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:25.233868 | orchestrator | 2025-07-06 20:06:25 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:25.236363 | orchestrator | 2025-07-06 20:06:25 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:06:25.240619 | orchestrator | 2025-07-06 20:06:25 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:25.243341 | orchestrator | 2025-07-06 20:06:25 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:25.243474 | orchestrator | 2025-07-06 20:06:25 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:28.274293 | orchestrator | 2025-07-06 20:06:28 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:28.275233 | orchestrator | 2025-07-06 20:06:28 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state STARTED 2025-07-06 20:06:28.277047 | orchestrator | 2025-07-06 20:06:28 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:28.278395 | orchestrator | 2025-07-06 20:06:28 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:28.278429 | orchestrator | 2025-07-06 20:06:28 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:31.310409 | orchestrator | 2025-07-06 20:06:31 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:31.310826 | orchestrator | 2025-07-06 20:06:31 | INFO  | Task bc4c1149-75f1-41b2-a300-dcdef984d97f is in state SUCCESS 2025-07-06 20:06:31.311916 | orchestrator | 2025-07-06 20:06:31.312010 | orchestrator | 2025-07-06 20:06:31.312035 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-07-06 20:06:31.312055 | orchestrator | 2025-07-06 20:06:31.312074 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-07-06 
20:06:31.312093 | orchestrator | Sunday 06 July 2025 20:06:13 +0000 (0:00:00.153) 0:00:00.153 *********** 2025-07-06 20:06:31.312111 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-07-06 20:06:31.312130 | orchestrator | 2025-07-06 20:06:31.312148 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-07-06 20:06:31.312167 | orchestrator | Sunday 06 July 2025 20:06:14 +0000 (0:00:00.794) 0:00:00.948 *********** 2025-07-06 20:06:31.312184 | orchestrator | changed: [testbed-manager] 2025-07-06 20:06:31.312201 | orchestrator | 2025-07-06 20:06:31.312218 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-07-06 20:06:31.312236 | orchestrator | Sunday 06 July 2025 20:06:15 +0000 (0:00:01.138) 0:00:02.086 *********** 2025-07-06 20:06:31.312254 | orchestrator | changed: [testbed-manager] 2025-07-06 20:06:31.312272 | orchestrator | 2025-07-06 20:06:31.312359 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:06:31.312381 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:06:31.312404 | orchestrator | 2025-07-06 20:06:31.312425 | orchestrator | 2025-07-06 20:06:31.312446 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:06:31.312467 | orchestrator | Sunday 06 July 2025 20:06:15 +0000 (0:00:00.350) 0:00:02.437 *********** 2025-07-06 20:06:31.312490 | orchestrator | =============================================================================== 2025-07-06 20:06:31.312511 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.14s 2025-07-06 20:06:31.312534 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.80s 2025-07-06 20:06:31.312556 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.35s 2025-07-06 20:06:31.312577 | orchestrator | 2025-07-06 20:06:31.312604 | orchestrator | 2025-07-06 20:06:31.312625 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-07-06 20:06:31.312645 | orchestrator | 2025-07-06 20:06:31.312666 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-07-06 20:06:31.312685 | orchestrator | Sunday 06 July 2025 20:06:14 +0000 (0:00:00.129) 0:00:00.129 *********** 2025-07-06 20:06:31.312704 | orchestrator | ok: [testbed-manager] 2025-07-06 20:06:31.312753 | orchestrator | 2025-07-06 20:06:31.312773 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-07-06 20:06:31.312792 | orchestrator | Sunday 06 July 2025 20:06:14 +0000 (0:00:00.508) 0:00:00.638 *********** 2025-07-06 20:06:31.312842 | orchestrator | ok: [testbed-manager] 2025-07-06 20:06:31.312861 | orchestrator | 2025-07-06 20:06:31.312879 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-07-06 20:06:31.312897 | orchestrator | Sunday 06 July 2025 20:06:15 +0000 (0:00:00.646) 0:00:01.284 *********** 2025-07-06 20:06:31.312914 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-07-06 20:06:31.312932 | orchestrator | 2025-07-06 20:06:31.312949 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-07-06 20:06:31.312966 | 
orchestrator | Sunday 06 July 2025 20:06:16 +0000 (0:00:00.609) 0:00:01.894 *********** 2025-07-06 20:06:31.312984 | orchestrator | changed: [testbed-manager] 2025-07-06 20:06:31.313001 | orchestrator | 2025-07-06 20:06:31.313018 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-07-06 20:06:31.313035 | orchestrator | Sunday 06 July 2025 20:06:16 +0000 (0:00:00.920) 0:00:02.814 *********** 2025-07-06 20:06:31.313052 | orchestrator | changed: [testbed-manager] 2025-07-06 20:06:31.313070 | orchestrator | 2025-07-06 20:06:31.313087 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-07-06 20:06:31.313105 | orchestrator | Sunday 06 July 2025 20:06:17 +0000 (0:00:00.688) 0:00:03.503 *********** 2025-07-06 20:06:31.313123 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-06 20:06:31.313140 | orchestrator | 2025-07-06 20:06:31.313158 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-07-06 20:06:31.313176 | orchestrator | Sunday 06 July 2025 20:06:18 +0000 (0:00:01.158) 0:00:04.661 *********** 2025-07-06 20:06:31.313194 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-06 20:06:31.313211 | orchestrator | 2025-07-06 20:06:31.313229 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-07-06 20:06:31.313248 | orchestrator | Sunday 06 July 2025 20:06:19 +0000 (0:00:00.622) 0:00:05.283 *********** 2025-07-06 20:06:31.313265 | orchestrator | ok: [testbed-manager] 2025-07-06 20:06:31.313283 | orchestrator | 2025-07-06 20:06:31.313300 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-07-06 20:06:31.313318 | orchestrator | Sunday 06 July 2025 20:06:19 +0000 (0:00:00.352) 0:00:05.636 *********** 2025-07-06 20:06:31.313338 | orchestrator | ok: [testbed-manager] 2025-07-06 20:06:31.313356 | orchestrator | 2025-07-06 20:06:31.313373 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:06:31.313391 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:06:31.313410 | orchestrator | 2025-07-06 20:06:31.313427 | orchestrator | 2025-07-06 20:06:31.313444 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:06:31.313462 | orchestrator | Sunday 06 July 2025 20:06:20 +0000 (0:00:00.281) 0:00:05.917 *********** 2025-07-06 20:06:31.313479 | orchestrator | =============================================================================== 2025-07-06 20:06:31.313497 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.16s 2025-07-06 20:06:31.313529 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.92s 2025-07-06 20:06:31.313547 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.69s 2025-07-06 20:06:31.313589 | orchestrator | Create .kube directory -------------------------------------------------- 0.65s 2025-07-06 20:06:31.313608 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.62s 2025-07-06 20:06:31.313626 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.61s 2025-07-06 20:06:31.313644 | orchestrator | Get home directory of operator user 
------------------------------------- 0.51s 2025-07-06 20:06:31.313661 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.35s 2025-07-06 20:06:31.313679 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.28s 2025-07-06 20:06:31.313697 | orchestrator | 2025-07-06 20:06:31.313741 | orchestrator | 2025-07-06 20:06:31.313761 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-07-06 20:06:31.313798 | orchestrator | 2025-07-06 20:06:31.313816 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-07-06 20:06:31.313834 | orchestrator | Sunday 06 July 2025 20:04:14 +0000 (0:00:00.089) 0:00:00.089 *********** 2025-07-06 20:06:31.313853 | orchestrator | ok: [localhost] => { 2025-07-06 20:06:31.313873 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-07-06 20:06:31.313892 | orchestrator | } 2025-07-06 20:06:31.313910 | orchestrator | 2025-07-06 20:06:31.313929 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-07-06 20:06:31.313947 | orchestrator | Sunday 06 July 2025 20:04:14 +0000 (0:00:00.030) 0:00:00.119 *********** 2025-07-06 20:06:31.313966 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-07-06 20:06:31.313986 | orchestrator | ...ignoring 2025-07-06 20:06:31.314005 | orchestrator | 2025-07-06 20:06:31.314102 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-07-06 20:06:31.314122 | orchestrator | Sunday 06 July 2025 20:04:17 +0000 (0:00:03.129) 0:00:03.248 *********** 2025-07-06 20:06:31.314141 | orchestrator | skipping: [localhost] 2025-07-06 20:06:31.314161 | orchestrator | 2025-07-06 20:06:31.314179 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-07-06 20:06:31.314199 | orchestrator | Sunday 06 July 2025 20:04:17 +0000 (0:00:00.082) 0:00:03.331 *********** 2025-07-06 20:06:31.314218 | orchestrator | ok: [localhost] 2025-07-06 20:06:31.314238 | orchestrator | 2025-07-06 20:06:31.314256 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:06:31.314276 | orchestrator | 2025-07-06 20:06:31.314293 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:06:31.314309 | orchestrator | Sunday 06 July 2025 20:04:17 +0000 (0:00:00.223) 0:00:03.554 *********** 2025-07-06 20:06:31.314327 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:31.314344 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:31.314362 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:31.314378 | orchestrator | 2025-07-06 20:06:31.314396 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:06:31.314413 | orchestrator | Sunday 06 July 2025 20:04:18 +0000 (0:00:00.459) 0:00:04.014 *********** 2025-07-06 20:06:31.314430 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-07-06 20:06:31.314448 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-07-06 20:06:31.314465 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-07-06 20:06:31.314482 | 
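The "Set kolla_action_rabbitmq" play above probes the RabbitMQ management endpoint to decide whether Kolla should treat this run as a fresh deployment or an upgrade; the timeout is expected on a new testbed, which is why the play prints the notice first and ignores the error. A minimal sketch of that detection logic follows, assuming the internal VIP 192.168.16.9 seen in this run and a hard-coded fallback action (the real play takes the fallback from its kolla_action_ng variable):

# Sketch only: reproduces the probe-and-decide pattern shown above, not the
# exact OSISM playbook. The wait_for probe fails harmlessly when RabbitMQ has
# never been deployed.
- name: Set kolla_action_rabbitmq
  hosts: localhost
  gather_facts: false
  vars:
    api_interface_address: 192.168.16.9   # assumption: internal VIP from this log
  tasks:
    - name: Check RabbitMQ service
      ansible.builtin.wait_for:
        host: "{{ api_interface_address }}"
        port: 15672
        search_regex: RabbitMQ Management
        timeout: 2                        # assumption: matches the observed 2 s elapsed
      register: rabbitmq_probe
      ignore_errors: true

    - name: Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running
      ansible.builtin.set_fact:
        kolla_action_rabbitmq: upgrade
      when: rabbitmq_probe is succeeded

    - name: Set kolla_action_rabbitmq to the fresh-deploy action otherwise
      ansible.builtin.set_fact:
        kolla_action_rabbitmq: deploy     # the real play uses kolla_action_ng here
      when: rabbitmq_probe is failed

On this run the probe times out because nothing answers on the VIP yet, so the play falls through to the fresh-deploy branch.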
orchestrator | 2025-07-06 20:06:31.314498 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-07-06 20:06:31.314515 | orchestrator | 2025-07-06 20:06:31.314533 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-06 20:06:31.314550 | orchestrator | Sunday 06 July 2025 20:04:18 +0000 (0:00:00.531) 0:00:04.545 *********** 2025-07-06 20:06:31.314569 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:06:31.314586 | orchestrator | 2025-07-06 20:06:31.314604 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-07-06 20:06:31.314621 | orchestrator | Sunday 06 July 2025 20:04:19 +0000 (0:00:00.906) 0:00:05.452 *********** 2025-07-06 20:06:31.314637 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:31.314654 | orchestrator | 2025-07-06 20:06:31.314672 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-07-06 20:06:31.314690 | orchestrator | Sunday 06 July 2025 20:04:20 +0000 (0:00:00.969) 0:00:06.430 *********** 2025-07-06 20:06:31.314709 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:31.314763 | orchestrator | 2025-07-06 20:06:31.314780 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-07-06 20:06:31.314818 | orchestrator | Sunday 06 July 2025 20:04:21 +0000 (0:00:00.727) 0:00:07.158 *********** 2025-07-06 20:06:31.314834 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:31.314850 | orchestrator | 2025-07-06 20:06:31.314865 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-07-06 20:06:31.314881 | orchestrator | Sunday 06 July 2025 20:04:21 +0000 (0:00:00.338) 0:00:07.497 *********** 2025-07-06 20:06:31.314898 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:31.314913 | orchestrator | 2025-07-06 20:06:31.314929 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-07-06 20:06:31.314945 | orchestrator | Sunday 06 July 2025 20:04:22 +0000 (0:00:00.390) 0:00:07.888 *********** 2025-07-06 20:06:31.314960 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:31.314977 | orchestrator | 2025-07-06 20:06:31.314992 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-06 20:06:31.315009 | orchestrator | Sunday 06 July 2025 20:04:22 +0000 (0:00:00.649) 0:00:08.538 *********** 2025-07-06 20:06:31.315024 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:06:31.315040 | orchestrator | 2025-07-06 20:06:31.315065 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-07-06 20:06:31.315095 | orchestrator | Sunday 06 July 2025 20:04:23 +0000 (0:00:00.763) 0:00:09.302 *********** 2025-07-06 20:06:31.315111 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:31.315127 | orchestrator | 2025-07-06 20:06:31.315143 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-07-06 20:06:31.315158 | orchestrator | Sunday 06 July 2025 20:04:24 +0000 (0:00:00.821) 0:00:10.124 *********** 2025-07-06 20:06:31.315174 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:31.315190 | orchestrator | 
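The remove-ha-all-policy.yml tasks included here list the policies of a running broker and drop the legacy "ha-all" mirroring policy before the containers are reconfigured; on this first deployment they are skipped because no RabbitMQ container exists yet. A rough equivalent using plain docker and rabbitmqctl calls (the container name rabbitmq, the inventory group, and the existence check are assumptions; Kolla drives this through its own container modules):

# Sketch under the assumptions above: only touch the policy if a rabbitmq
# container is already running and actually carries an "ha-all" policy.
- name: Remove legacy ha-all policy
  hosts: rabbitmq
  gather_facts: false
  tasks:
    - name: Check whether a rabbitmq container is running
      ansible.builtin.command: docker ps -q --filter name=rabbitmq
      register: rabbitmq_container
      changed_when: false

    - name: List RabbitMQ policies
      ansible.builtin.command: docker exec rabbitmq rabbitmqctl list_policies
      register: rabbitmq_policies
      changed_when: false
      when: rabbitmq_container.stdout != ""

    - name: Remove ha-all policy from RabbitMQ
      ansible.builtin.command: docker exec rabbitmq rabbitmqctl clear_policy ha-all
      when:
        - rabbitmq_container.stdout != ""
        - "'ha-all' in (rabbitmq_policies.stdout | default(''))"

Clearing the policy matters on upgrades, where classic queue mirroring gives way to quorum queues; on a green-field run like this one there is simply nothing to remove.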
2025-07-06 20:06:31.315205 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-07-06 20:06:31.315221 | orchestrator | Sunday 06 July 2025 20:04:24 +0000 (0:00:00.351) 0:00:10.475 *********** 2025-07-06 20:06:31.315236 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:31.315252 | orchestrator | 2025-07-06 20:06:31.315268 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-07-06 20:06:31.315284 | orchestrator | Sunday 06 July 2025 20:04:25 +0000 (0:00:00.374) 0:00:10.850 *********** 2025-07-06 20:06:31.315307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:06:31.315330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:06:31.315360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:06:31.315379 | orchestrator | 2025-07-06 20:06:31.315395 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-07-06 20:06:31.315418 | orchestrator | Sunday 06 July 2025 20:04:26 +0000 (0:00:00.998) 0:00:11.849 *********** 2025-07-06 20:06:31.315446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:06:31.315464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:06:31.315489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:06:31.315507 | orchestrator | 2025-07-06 20:06:31.315524 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-07-06 20:06:31.315540 | orchestrator | Sunday 06 July 2025 20:04:28 +0000 (0:00:02.167) 0:00:14.016 *********** 2025-07-06 20:06:31.315555 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-06 20:06:31.315571 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-06 20:06:31.315587 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-06 20:06:31.315603 | orchestrator | 2025-07-06 20:06:31.315619 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-07-06 20:06:31.315634 | orchestrator | Sunday 06 July 2025 20:04:30 +0000 (0:00:01.794) 0:00:15.810 *********** 2025-07-06 20:06:31.315651 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-06 20:06:31.315666 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-06 20:06:31.315688 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-06 20:06:31.315704 | orchestrator | 2025-07-06 20:06:31.315751 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-07-06 20:06:31.315768 | orchestrator | Sunday 06 July 2025 20:04:32 +0000 (0:00:02.914) 0:00:18.725 *********** 2025-07-06 20:06:31.315786 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-06 20:06:31.315803 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-06 20:06:31.315818 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-06 20:06:31.315834 | orchestrator | 2025-07-06 20:06:31.315851 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-07-06 20:06:31.315867 | orchestrator | Sunday 06 July 2025 20:04:34 +0000 (0:00:01.935) 0:00:20.661 *********** 2025-07-06 20:06:31.315883 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-06 20:06:31.315899 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-06 20:06:31.315915 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-06 20:06:31.315931 | orchestrator | 2025-07-06 20:06:31.315947 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-07-06 20:06:31.315963 | orchestrator | Sunday 06 July 2025 20:04:36 +0000 (0:00:01.992) 0:00:22.653 *********** 2025-07-06 20:06:31.315978 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-06 20:06:31.315993 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-06 20:06:31.316015 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-06 20:06:31.316029 | orchestrator | 2025-07-06 20:06:31.316042 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-07-06 20:06:31.316055 | orchestrator | Sunday 06 July 2025 20:04:38 +0000 (0:00:01.443) 0:00:24.096 *********** 2025-07-06 20:06:31.316068 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-06 20:06:31.316080 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-06 20:06:31.316094 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-06 20:06:31.316106 | orchestrator | 2025-07-06 20:06:31.316120 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-06 20:06:31.316133 | orchestrator | Sunday 06 July 2025 20:04:39 +0000 (0:00:01.404) 0:00:25.501 *********** 2025-07-06 20:06:31.316145 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:31.316157 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:31.316169 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:31.316182 | orchestrator | 2025-07-06 20:06:31.316194 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-07-06 20:06:31.316207 | orchestrator | Sunday 06 July 2025 20:04:40 +0000 (0:00:00.288) 0:00:25.790 *********** 2025-07-06 20:06:31.316220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:06:31.316248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:06:31.316263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:06:31.316283 | orchestrator | 2025-07-06 20:06:31.316296 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-07-06 20:06:31.316309 | orchestrator | Sunday 06 July 2025 20:04:41 +0000 (0:00:01.115) 0:00:26.905 *********** 2025-07-06 20:06:31.316322 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:31.316334 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:31.316347 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:31.316359 | orchestrator | 2025-07-06 20:06:31.316372 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-07-06 20:06:31.316384 | orchestrator | Sunday 06 July 2025 20:04:41 +0000 (0:00:00.771) 0:00:27.676 *********** 2025-07-06 20:06:31.316398 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:31.316411 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:31.316424 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:31.316437 | orchestrator | 2025-07-06 20:06:31.316450 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-07-06 20:06:31.316462 | orchestrator | Sunday 06 July 2025 20:04:49 +0000 (0:00:07.252) 0:00:34.929 *********** 2025-07-06 20:06:31.316475 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:31.316487 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:31.316499 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:31.316511 | orchestrator | 2025-07-06 20:06:31.316523 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-06 20:06:31.316535 | orchestrator | 2025-07-06 20:06:31.316549 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-06 20:06:31.316561 | orchestrator | Sunday 06 July 2025 20:04:49 +0000 (0:00:00.467) 0:00:35.397 *********** 2025-07-06 20:06:31.316574 | orchestrator | ok: 
[testbed-node-0] 2025-07-06 20:06:31.316586 | orchestrator | 2025-07-06 20:06:31.316599 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-06 20:06:31.316611 | orchestrator | Sunday 06 July 2025 20:04:50 +0000 (0:00:00.639) 0:00:36.037 *********** 2025-07-06 20:06:31.316624 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:31.316637 | orchestrator | 2025-07-06 20:06:31.316650 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-06 20:06:31.316663 | orchestrator | Sunday 06 July 2025 20:04:50 +0000 (0:00:00.254) 0:00:36.292 *********** 2025-07-06 20:06:31.316676 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:31.316689 | orchestrator | 2025-07-06 20:06:31.316701 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-06 20:06:31.316733 | orchestrator | Sunday 06 July 2025 20:04:52 +0000 (0:00:01.681) 0:00:37.973 *********** 2025-07-06 20:06:31.316748 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:31.316761 | orchestrator | 2025-07-06 20:06:31.316774 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-06 20:06:31.316788 | orchestrator | 2025-07-06 20:06:31.316801 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-06 20:06:31.316814 | orchestrator | Sunday 06 July 2025 20:05:46 +0000 (0:00:54.427) 0:01:32.400 *********** 2025-07-06 20:06:31.316828 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:31.316840 | orchestrator | 2025-07-06 20:06:31.316852 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-06 20:06:31.316865 | orchestrator | Sunday 06 July 2025 20:05:47 +0000 (0:00:00.682) 0:01:33.083 *********** 2025-07-06 20:06:31.316887 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:31.316900 | orchestrator | 2025-07-06 20:06:31.316912 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-06 20:06:31.316925 | orchestrator | Sunday 06 July 2025 20:05:47 +0000 (0:00:00.475) 0:01:33.559 *********** 2025-07-06 20:06:31.316938 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:31.316951 | orchestrator | 2025-07-06 20:06:31.316963 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-06 20:06:31.316976 | orchestrator | Sunday 06 July 2025 20:05:49 +0000 (0:00:01.817) 0:01:35.376 *********** 2025-07-06 20:06:31.316988 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:31.317000 | orchestrator | 2025-07-06 20:06:31.317013 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-06 20:06:31.317026 | orchestrator | 2025-07-06 20:06:31.317039 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-06 20:06:31.317059 | orchestrator | Sunday 06 July 2025 20:06:06 +0000 (0:00:16.634) 0:01:52.011 *********** 2025-07-06 20:06:31.317072 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:31.317085 | orchestrator | 2025-07-06 20:06:31.317098 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-06 20:06:31.317111 | orchestrator | Sunday 06 July 2025 20:06:07 +0000 (0:00:00.772) 0:01:52.784 *********** 2025-07-06 20:06:31.317123 | orchestrator | skipping: [testbed-node-2] 
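Each "Restart rabbitmq services" play in this section handles a single broker: it restarts the container and then blocks until the node reports healthy again before the next host is touched, which is why "Waiting for rabbitmq to start" dominates the task recap (about 82 s in total). A minimal sketch of that rolling pattern, assuming plain docker and rabbitmq-diagnostics commands instead of the exact Kolla modules, and leaving out the maintenance-mode drain that is skipped on a fresh deployment:

# Sketch only: serial: 1 restarts one broker at a time so that a majority of
# the three-node cluster stays available throughout the rollout.
- name: Restart rabbitmq services
  hosts: rabbitmq
  serial: 1
  gather_facts: false
  tasks:
    - name: Restart rabbitmq container
      ansible.builtin.command: docker restart rabbitmq

    - name: Waiting for rabbitmq to start
      ansible.builtin.command: docker exec rabbitmq rabbitmq-diagnostics -q check_running
      register: rabbitmq_running
      until: rabbitmq_running.rc == 0
      retries: 60                         # assumption: allow up to ~5 minutes per node
      delay: 5
      changed_when: false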
2025-07-06 20:06:31.317135 | orchestrator | 2025-07-06 20:06:31.317149 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-06 20:06:31.317162 | orchestrator | Sunday 06 July 2025 20:06:07 +0000 (0:00:00.504) 0:01:53.289 *********** 2025-07-06 20:06:31.317174 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:31.317186 | orchestrator | 2025-07-06 20:06:31.317199 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-06 20:06:31.317212 | orchestrator | Sunday 06 July 2025 20:06:14 +0000 (0:00:07.221) 0:02:00.510 *********** 2025-07-06 20:06:31.317225 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:31.317237 | orchestrator | 2025-07-06 20:06:31.317250 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-07-06 20:06:31.317263 | orchestrator | 2025-07-06 20:06:31.317277 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-07-06 20:06:31.317290 | orchestrator | Sunday 06 July 2025 20:06:25 +0000 (0:00:10.978) 0:02:11.488 *********** 2025-07-06 20:06:31.317303 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:06:31.317316 | orchestrator | 2025-07-06 20:06:31.317329 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-07-06 20:06:31.317343 | orchestrator | Sunday 06 July 2025 20:06:26 +0000 (0:00:01.166) 0:02:12.655 *********** 2025-07-06 20:06:31.317357 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-06 20:06:31.317371 | orchestrator | enable_outward_rabbitmq_True 2025-07-06 20:06:31.317384 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-06 20:06:31.317397 | orchestrator | outward_rabbitmq_restart 2025-07-06 20:06:31.317410 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:31.317423 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:31.317435 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:31.317448 | orchestrator | 2025-07-06 20:06:31.317544 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-07-06 20:06:31.317570 | orchestrator | skipping: no hosts matched 2025-07-06 20:06:31.317582 | orchestrator | 2025-07-06 20:06:31.317596 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-07-06 20:06:31.317609 | orchestrator | skipping: no hosts matched 2025-07-06 20:06:31.317622 | orchestrator | 2025-07-06 20:06:31.317635 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-07-06 20:06:31.317647 | orchestrator | skipping: no hosts matched 2025-07-06 20:06:31.317660 | orchestrator | 2025-07-06 20:06:31.317673 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:06:31.317695 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-07-06 20:06:31.317709 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-06 20:06:31.317742 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:06:31.317757 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 
20:06:31.317770 | orchestrator | 2025-07-06 20:06:31.317783 | orchestrator | 2025-07-06 20:06:31.317797 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:06:31.317810 | orchestrator | Sunday 06 July 2025 20:06:29 +0000 (0:00:03.011) 0:02:15.667 *********** 2025-07-06 20:06:31.317822 | orchestrator | =============================================================================== 2025-07-06 20:06:31.317836 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 82.04s 2025-07-06 20:06:31.317849 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.72s 2025-07-06 20:06:31.317862 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.25s 2025-07-06 20:06:31.317874 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.13s 2025-07-06 20:06:31.317887 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.01s 2025-07-06 20:06:31.317901 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.91s 2025-07-06 20:06:31.317913 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.17s 2025-07-06 20:06:31.317926 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.10s 2025-07-06 20:06:31.317939 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.99s 2025-07-06 20:06:31.317952 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.94s 2025-07-06 20:06:31.317965 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.79s 2025-07-06 20:06:31.317978 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.44s 2025-07-06 20:06:31.317991 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.40s 2025-07-06 20:06:31.318005 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.23s 2025-07-06 20:06:31.318087 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.17s 2025-07-06 20:06:31.318115 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.12s 2025-07-06 20:06:31.318130 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.00s 2025-07-06 20:06:31.318143 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.98s 2025-07-06 20:06:31.318158 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.91s 2025-07-06 20:06:31.318170 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.82s 2025-07-06 20:06:31.318184 | orchestrator | 2025-07-06 20:06:31 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:31.318199 | orchestrator | 2025-07-06 20:06:31 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:31.318213 | orchestrator | 2025-07-06 20:06:31 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:34.341912 | orchestrator | 2025-07-06 20:06:34 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:34.342008 | orchestrator | 2025-07-06 20:06:34 | INFO  | Task 
809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:34.345662 | orchestrator | 2025-07-06 20:06:34 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:34.345752 | orchestrator | 2025-07-06 20:06:34 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:37.372922 | orchestrator | 2025-07-06 20:06:37 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:37.373009 | orchestrator | 2025-07-06 20:06:37 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:37.375141 | orchestrator | 2025-07-06 20:06:37 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:37.375176 | orchestrator | 2025-07-06 20:06:37 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:40.411023 | orchestrator | 2025-07-06 20:06:40 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:40.411126 | orchestrator | 2025-07-06 20:06:40 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:40.411493 | orchestrator | 2025-07-06 20:06:40 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:40.411600 | orchestrator | 2025-07-06 20:06:40 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:43.455824 | orchestrator | 2025-07-06 20:06:43 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:43.457770 | orchestrator | 2025-07-06 20:06:43 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:43.461082 | orchestrator | 2025-07-06 20:06:43 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:43.465493 | orchestrator | 2025-07-06 20:06:43 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:46.500753 | orchestrator | 2025-07-06 20:06:46 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:46.503239 | orchestrator | 2025-07-06 20:06:46 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:46.505629 | orchestrator | 2025-07-06 20:06:46 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:46.505787 | orchestrator | 2025-07-06 20:06:46 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:49.550434 | orchestrator | 2025-07-06 20:06:49 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:49.550740 | orchestrator | 2025-07-06 20:06:49 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:49.551910 | orchestrator | 2025-07-06 20:06:49 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:49.552081 | orchestrator | 2025-07-06 20:06:49 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:52.595930 | orchestrator | 2025-07-06 20:06:52 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:52.596053 | orchestrator | 2025-07-06 20:06:52 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:52.596070 | orchestrator | 2025-07-06 20:06:52 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:52.596146 | orchestrator | 2025-07-06 20:06:52 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:55.635017 | orchestrator | 2025-07-06 20:06:55 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state 
STARTED 2025-07-06 20:06:55.635668 | orchestrator | 2025-07-06 20:06:55 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:55.637193 | orchestrator | 2025-07-06 20:06:55 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:55.637323 | orchestrator | 2025-07-06 20:06:55 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:58.666372 | orchestrator | 2025-07-06 20:06:58 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:06:58.667347 | orchestrator | 2025-07-06 20:06:58 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:06:58.669450 | orchestrator | 2025-07-06 20:06:58 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:06:58.670059 | orchestrator | 2025-07-06 20:06:58 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:01.712424 | orchestrator | 2025-07-06 20:07:01 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:01.714207 | orchestrator | 2025-07-06 20:07:01 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:01.716976 | orchestrator | 2025-07-06 20:07:01 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:07:01.717203 | orchestrator | 2025-07-06 20:07:01 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:04.746098 | orchestrator | 2025-07-06 20:07:04 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:04.746239 | orchestrator | 2025-07-06 20:07:04 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:04.746658 | orchestrator | 2025-07-06 20:07:04 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:07:04.746724 | orchestrator | 2025-07-06 20:07:04 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:07.772531 | orchestrator | 2025-07-06 20:07:07 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:07.772805 | orchestrator | 2025-07-06 20:07:07 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:07.773439 | orchestrator | 2025-07-06 20:07:07 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:07:07.773464 | orchestrator | 2025-07-06 20:07:07 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:10.809234 | orchestrator | 2025-07-06 20:07:10 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:10.814525 | orchestrator | 2025-07-06 20:07:10 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:10.816750 | orchestrator | 2025-07-06 20:07:10 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:07:10.817048 | orchestrator | 2025-07-06 20:07:10 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:13.858439 | orchestrator | 2025-07-06 20:07:13 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:13.858546 | orchestrator | 2025-07-06 20:07:13 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:13.861386 | orchestrator | 2025-07-06 20:07:13 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:07:13.861446 | orchestrator | 2025-07-06 20:07:13 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:16.908378 | orchestrator 
| 2025-07-06 20:07:16 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:16.909045 | orchestrator | 2025-07-06 20:07:16 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:16.911154 | orchestrator | 2025-07-06 20:07:16 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:07:16.911312 | orchestrator | 2025-07-06 20:07:16 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:19.961107 | orchestrator | 2025-07-06 20:07:19 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:19.963886 | orchestrator | 2025-07-06 20:07:19 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:19.966012 | orchestrator | 2025-07-06 20:07:19 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:07:19.966090 | orchestrator | 2025-07-06 20:07:19 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:23.019413 | orchestrator | 2025-07-06 20:07:23 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:23.019510 | orchestrator | 2025-07-06 20:07:23 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:23.019525 | orchestrator | 2025-07-06 20:07:23 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:07:23.019603 | orchestrator | 2025-07-06 20:07:23 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:26.071732 | orchestrator | 2025-07-06 20:07:26 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:26.071864 | orchestrator | 2025-07-06 20:07:26 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:26.071915 | orchestrator | 2025-07-06 20:07:26 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:07:26.071934 | orchestrator | 2025-07-06 20:07:26 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:29.110788 | orchestrator | 2025-07-06 20:07:29 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:29.113190 | orchestrator | 2025-07-06 20:07:29 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:29.115855 | orchestrator | 2025-07-06 20:07:29 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:07:29.115886 | orchestrator | 2025-07-06 20:07:29 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:32.163815 | orchestrator | 2025-07-06 20:07:32 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:32.166535 | orchestrator | 2025-07-06 20:07:32 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:32.170528 | orchestrator | 2025-07-06 20:07:32 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:07:32.170665 | orchestrator | 2025-07-06 20:07:32 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:35.209932 | orchestrator | 2025-07-06 20:07:35 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:35.211454 | orchestrator | 2025-07-06 20:07:35 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:35.213153 | orchestrator | 2025-07-06 20:07:35 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:07:35.213179 | orchestrator | 2025-07-06 20:07:35 | 
INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:38.266737 | orchestrator | 2025-07-06 20:07:38 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:38.268390 | orchestrator | 2025-07-06 20:07:38 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:38.269115 | orchestrator | 2025-07-06 20:07:38 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state STARTED 2025-07-06 20:07:38.269158 | orchestrator | 2025-07-06 20:07:38 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:41.321125 | orchestrator | 2025-07-06 20:07:41 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:41.322796 | orchestrator | 2025-07-06 20:07:41 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:41.326218 | orchestrator | 2025-07-06 20:07:41 | INFO  | Task 0a66c791-6314-4230-ac6a-15b4283acf0f is in state SUCCESS 2025-07-06 20:07:41.326252 | orchestrator | 2025-07-06 20:07:41 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:41.328551 | orchestrator | 2025-07-06 20:07:41.328596 | orchestrator | 2025-07-06 20:07:41.328609 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:07:41.328652 | orchestrator | 2025-07-06 20:07:41.328664 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:07:41.328675 | orchestrator | Sunday 06 July 2025 20:05:08 +0000 (0:00:00.162) 0:00:00.162 *********** 2025-07-06 20:07:41.328687 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:07:41.328699 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:07:41.328710 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:07:41.328721 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.328732 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:07:41.328743 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.328755 | orchestrator | 2025-07-06 20:07:41.328767 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:07:41.328778 | orchestrator | Sunday 06 July 2025 20:05:09 +0000 (0:00:00.717) 0:00:00.879 *********** 2025-07-06 20:07:41.328790 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-07-06 20:07:41.328801 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-07-06 20:07:41.328812 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-07-06 20:07:41.328823 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-07-06 20:07:41.328835 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-07-06 20:07:41.328846 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-07-06 20:07:41.328857 | orchestrator | 2025-07-06 20:07:41.328877 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-07-06 20:07:41.328888 | orchestrator | 2025-07-06 20:07:41.328900 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-07-06 20:07:41.328911 | orchestrator | Sunday 06 July 2025 20:05:10 +0000 (0:00:01.495) 0:00:02.375 *********** 2025-07-06 20:07:41.328924 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:07:41.328937 | orchestrator | 2025-07-06 20:07:41.328948 | orchestrator | TASK 
[ovn-controller : Ensuring config directories exist] ********************** 2025-07-06 20:07:41.328959 | orchestrator | Sunday 06 July 2025 20:05:11 +0000 (0:00:01.164) 0:00:03.539 *********** 2025-07-06 20:07:41.328973 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329035 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329049 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329129 | orchestrator | 2025-07-06 20:07:41.329142 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-07-06 20:07:41.329156 | orchestrator | Sunday 06 July 2025 20:05:12 +0000 (0:00:01.139) 0:00:04.679 *********** 2025-07-06 20:07:41.329170 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329189 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329202 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329261 | orchestrator | 2025-07-06 20:07:41.329274 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-07-06 20:07:41.329286 | orchestrator | Sunday 06 July 2025 20:05:14 +0000 (0:00:01.499) 0:00:06.178 *********** 2025-07-06 20:07:41.329300 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329312 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329332 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329389 | orchestrator | 2025-07-06 20:07:41.329402 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-07-06 20:07:41.329414 | orchestrator | Sunday 06 July 2025 20:05:15 +0000 (0:00:01.147) 0:00:07.326 *********** 2025-07-06 20:07:41.329428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329446 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329459 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329516 | orchestrator | 2025-07-06 20:07:41.329527 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-07-06 20:07:41.329538 | orchestrator | Sunday 06 July 2025 20:05:16 +0000 (0:00:01.536) 0:00:08.863 *********** 2025-07-06 20:07:41.329554 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329565 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329577 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-07-06 20:07:41.329595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.329652 | orchestrator | 2025-07-06 20:07:41.329663 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-07-06 20:07:41.329674 | orchestrator | Sunday 06 July 2025 20:05:18 +0000 (0:00:01.357) 0:00:10.220 *********** 2025-07-06 20:07:41.329686 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:07:41.329697 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:07:41.329708 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:07:41.329719 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:07:41.329730 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:07:41.329740 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:07:41.329751 | orchestrator | 2025-07-06 20:07:41.329762 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-07-06 20:07:41.329773 | orchestrator | Sunday 06 July 2025 20:05:20 +0000 (0:00:02.532) 0:00:12.753 *********** 2025-07-06 20:07:41.329784 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-07-06 20:07:41.329796 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-07-06 20:07:41.329807 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-07-06 20:07:41.329823 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-07-06 20:07:41.329834 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-07-06 20:07:41.329845 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-07-06 20:07:41.329856 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-06 20:07:41.329867 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-06 20:07:41.329878 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-06 20:07:41.329892 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-06 20:07:41.329910 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-06 20:07:41.329935 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-06 20:07:41.329947 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-06 20:07:41.329964 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-06 20:07:41.329976 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-06 20:07:41.329987 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-06 20:07:41.329998 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-06 20:07:41.330009 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-06 20:07:41.330113 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-06 20:07:41.330135 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-06 20:07:41.330154 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-06 20:07:41.330184 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-06 20:07:41.330201 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-06 20:07:41.330220 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-06 20:07:41.330238 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-06 20:07:41.330258 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-06 20:07:41.330277 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-06 20:07:41.330297 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-06 20:07:41.330317 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-06 20:07:41.330338 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-06 20:07:41.330357 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-06 20:07:41.330377 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-06 20:07:41.330396 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-06 
20:07:41.330417 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-06 20:07:41.330437 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-06 20:07:41.330456 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-07-06 20:07:41.330476 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-06 20:07:41.330495 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-07-06 20:07:41.330515 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-07-06 20:07:41.330535 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-07-06 20:07:41.330582 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-07-06 20:07:41.330602 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-07-06 20:07:41.330661 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-07-06 20:07:41.330681 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-07-06 20:07:41.330700 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-07-06 20:07:41.330720 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-07-06 20:07:41.330739 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-07-06 20:07:41.330758 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-07-06 20:07:41.330785 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-07-06 20:07:41.330804 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-07-06 20:07:41.330822 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-07-06 20:07:41.330842 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-07-06 20:07:41.330862 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-07-06 20:07:41.330882 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-07-06 20:07:41.330900 | orchestrator | 2025-07-06 20:07:41.330919 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-06 20:07:41.330937 | orchestrator | Sunday 06 July 2025 20:05:40 +0000 
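Note: the two tasks above ("Create br-int bridge on OpenvSwitch" and "Configure OVN in OVSDB") boil down to creating the integration bridge and writing per-chassis external_ids into the local Open vSwitch database. The sketch below reproduces that step as a standalone play, using the openvswitch.openvswitch collection and the values logged for testbed-node-0; it is only an illustration of what the role does here, not the shipped kolla-ansible task file. As the output shows, ovn-bridge-mappings and ovn-cms-options are set only on the three control/network nodes (testbed-node-0..2) and ensured absent on the pure compute nodes.

# Minimal sketch (assumed standalone play, not the kolla-ansible role itself)
- hosts: testbed-node-0
  become: true
  tasks:
    - name: Create the br-int integration bridge
      openvswitch.openvswitch.openvswitch_bridge:
        bridge: br-int
        state: present

    - name: Set OVN external_ids on the local Open vSwitch instance
      openvswitch.openvswitch.openvswitch_db:
        table: Open_vSwitch
        record: .
        col: external_ids
        key: "{{ item.name }}"
        value: "{{ item.value }}"
      loop:
        # Values taken verbatim from the log output above for testbed-node-0.
        - { name: ovn-encap-ip, value: "192.168.16.10" }
        - { name: ovn-encap-type, value: "geneve" }
        - { name: ovn-remote, value: "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" }
        - { name: ovn-remote-probe-interval, value: "60000" }
        - { name: ovn-openflow-probe-interval, value: "60" }
        - { name: ovn-monitor-all, value: "false" }
        # Only on gateway-capable nodes (testbed-node-0..2 in this run):
        - { name: ovn-bridge-mappings, value: "physnet1:br-ex" }
        - { name: ovn-cms-options, value: "enable-chassis-as-gw,availability-zones=nova" }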
(0:00:19.911) 0:00:32.664 *********** 2025-07-06 20:07:41.330955 | orchestrator | 2025-07-06 20:07:41.330974 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-06 20:07:41.330993 | orchestrator | Sunday 06 July 2025 20:05:40 +0000 (0:00:00.070) 0:00:32.735 *********** 2025-07-06 20:07:41.331011 | orchestrator | 2025-07-06 20:07:41.331030 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-06 20:07:41.331048 | orchestrator | Sunday 06 July 2025 20:05:40 +0000 (0:00:00.071) 0:00:32.806 *********** 2025-07-06 20:07:41.331066 | orchestrator | 2025-07-06 20:07:41.331085 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-06 20:07:41.331103 | orchestrator | Sunday 06 July 2025 20:05:41 +0000 (0:00:00.073) 0:00:32.880 *********** 2025-07-06 20:07:41.331121 | orchestrator | 2025-07-06 20:07:41.331139 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-06 20:07:41.331158 | orchestrator | Sunday 06 July 2025 20:05:41 +0000 (0:00:00.076) 0:00:32.956 *********** 2025-07-06 20:07:41.331176 | orchestrator | 2025-07-06 20:07:41.331195 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-06 20:07:41.331215 | orchestrator | Sunday 06 July 2025 20:05:41 +0000 (0:00:00.070) 0:00:33.027 *********** 2025-07-06 20:07:41.331232 | orchestrator | 2025-07-06 20:07:41.331250 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-07-06 20:07:41.331269 | orchestrator | Sunday 06 July 2025 20:05:41 +0000 (0:00:00.068) 0:00:33.095 *********** 2025-07-06 20:07:41.331286 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:07:41.331304 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:07:41.331335 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:07:41.331354 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:07:41.331372 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.331389 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.331407 | orchestrator | 2025-07-06 20:07:41.331426 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-07-06 20:07:41.331444 | orchestrator | Sunday 06 July 2025 20:05:43 +0000 (0:00:02.123) 0:00:35.218 *********** 2025-07-06 20:07:41.331461 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:07:41.331473 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:07:41.331483 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:07:41.331494 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:07:41.331505 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:07:41.331516 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:07:41.331526 | orchestrator | 2025-07-06 20:07:41.331537 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-07-06 20:07:41.331548 | orchestrator | 2025-07-06 20:07:41.331559 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-07-06 20:07:41.331569 | orchestrator | Sunday 06 July 2025 20:06:24 +0000 (0:00:41.529) 0:01:16.748 *********** 2025-07-06 20:07:41.331580 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:07:41.331591 | orchestrator | 2025-07-06 20:07:41.331602 | orchestrator | TASK [ovn-db : 
include_tasks] ************************************************** 2025-07-06 20:07:41.331613 | orchestrator | Sunday 06 July 2025 20:06:25 +0000 (0:00:00.548) 0:01:17.297 *********** 2025-07-06 20:07:41.331659 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:07:41.331671 | orchestrator | 2025-07-06 20:07:41.331691 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-07-06 20:07:41.331702 | orchestrator | Sunday 06 July 2025 20:06:26 +0000 (0:00:00.764) 0:01:18.061 *********** 2025-07-06 20:07:41.331713 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.331724 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.331735 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:07:41.331746 | orchestrator | 2025-07-06 20:07:41.331757 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-07-06 20:07:41.331768 | orchestrator | Sunday 06 July 2025 20:06:27 +0000 (0:00:01.035) 0:01:19.096 *********** 2025-07-06 20:07:41.331778 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.331789 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:07:41.331800 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.331811 | orchestrator | 2025-07-06 20:07:41.331821 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-07-06 20:07:41.331832 | orchestrator | Sunday 06 July 2025 20:06:27 +0000 (0:00:00.317) 0:01:19.414 *********** 2025-07-06 20:07:41.331843 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.331854 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:07:41.331864 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.331875 | orchestrator | 2025-07-06 20:07:41.331886 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-07-06 20:07:41.331897 | orchestrator | Sunday 06 July 2025 20:06:27 +0000 (0:00:00.276) 0:01:19.690 *********** 2025-07-06 20:07:41.331907 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.331925 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:07:41.331936 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.331947 | orchestrator | 2025-07-06 20:07:41.331958 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-07-06 20:07:41.331969 | orchestrator | Sunday 06 July 2025 20:06:28 +0000 (0:00:00.422) 0:01:20.113 *********** 2025-07-06 20:07:41.331980 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.331990 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:07:41.332001 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.332011 | orchestrator | 2025-07-06 20:07:41.332022 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-07-06 20:07:41.332046 | orchestrator | Sunday 06 July 2025 20:06:28 +0000 (0:00:00.281) 0:01:20.395 *********** 2025-07-06 20:07:41.332057 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.332068 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.332079 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.332089 | orchestrator | 2025-07-06 20:07:41.332100 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-07-06 20:07:41.332111 | orchestrator | Sunday 06 July 2025 20:06:28 +0000 (0:00:00.254) 0:01:20.649 *********** 2025-07-06 20:07:41.332122 | 
orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.332133 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.332143 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.332154 | orchestrator | 2025-07-06 20:07:41.332165 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-07-06 20:07:41.332176 | orchestrator | Sunday 06 July 2025 20:06:29 +0000 (0:00:00.267) 0:01:20.916 *********** 2025-07-06 20:07:41.332186 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.332197 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.332208 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.332219 | orchestrator | 2025-07-06 20:07:41.332229 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-07-06 20:07:41.332240 | orchestrator | Sunday 06 July 2025 20:06:29 +0000 (0:00:00.410) 0:01:21.327 *********** 2025-07-06 20:07:41.332251 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.332262 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.332272 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.332283 | orchestrator | 2025-07-06 20:07:41.332294 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-07-06 20:07:41.332305 | orchestrator | Sunday 06 July 2025 20:06:29 +0000 (0:00:00.267) 0:01:21.595 *********** 2025-07-06 20:07:41.332316 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.332327 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.332337 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.332348 | orchestrator | 2025-07-06 20:07:41.332359 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-07-06 20:07:41.332370 | orchestrator | Sunday 06 July 2025 20:06:29 +0000 (0:00:00.255) 0:01:21.850 *********** 2025-07-06 20:07:41.332380 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.332391 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.332402 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.332412 | orchestrator | 2025-07-06 20:07:41.332423 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-07-06 20:07:41.332434 | orchestrator | Sunday 06 July 2025 20:06:30 +0000 (0:00:00.289) 0:01:22.140 *********** 2025-07-06 20:07:41.332444 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.332460 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.332478 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.332489 | orchestrator | 2025-07-06 20:07:41.332500 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-07-06 20:07:41.332511 | orchestrator | Sunday 06 July 2025 20:06:30 +0000 (0:00:00.387) 0:01:22.527 *********** 2025-07-06 20:07:41.332522 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.332532 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.332543 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.332554 | orchestrator | 2025-07-06 20:07:41.332565 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-07-06 20:07:41.332575 | orchestrator | Sunday 06 July 2025 20:06:30 +0000 (0:00:00.241) 0:01:22.768 *********** 2025-07-06 20:07:41.332586 | orchestrator | skipping: [testbed-node-0] 2025-07-06 
20:07:41.332597 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.332608 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.332646 | orchestrator | 2025-07-06 20:07:41.332657 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-07-06 20:07:41.332675 | orchestrator | Sunday 06 July 2025 20:06:31 +0000 (0:00:00.249) 0:01:23.018 *********** 2025-07-06 20:07:41.332686 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.332697 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.332708 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.332718 | orchestrator | 2025-07-06 20:07:41.332737 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-07-06 20:07:41.332748 | orchestrator | Sunday 06 July 2025 20:06:31 +0000 (0:00:00.257) 0:01:23.275 *********** 2025-07-06 20:07:41.332759 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.332770 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.332780 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.332791 | orchestrator | 2025-07-06 20:07:41.332802 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-07-06 20:07:41.332813 | orchestrator | Sunday 06 July 2025 20:06:31 +0000 (0:00:00.365) 0:01:23.641 *********** 2025-07-06 20:07:41.332824 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.332834 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.332845 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.332856 | orchestrator | 2025-07-06 20:07:41.332867 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-07-06 20:07:41.332878 | orchestrator | Sunday 06 July 2025 20:06:32 +0000 (0:00:00.247) 0:01:23.888 *********** 2025-07-06 20:07:41.332889 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:07:41.332900 | orchestrator | 2025-07-06 20:07:41.332911 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-07-06 20:07:41.332928 | orchestrator | Sunday 06 July 2025 20:06:32 +0000 (0:00:00.523) 0:01:24.412 *********** 2025-07-06 20:07:41.332954 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.332986 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:07:41.333006 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.333024 | orchestrator | 2025-07-06 20:07:41.333043 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-07-06 20:07:41.333060 | orchestrator | Sunday 06 July 2025 20:06:33 +0000 (0:00:01.027) 0:01:25.439 *********** 2025-07-06 20:07:41.333077 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.333097 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:07:41.333116 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.333136 | orchestrator | 2025-07-06 20:07:41.333154 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-07-06 20:07:41.333176 | orchestrator | Sunday 06 July 2025 20:06:34 +0000 (0:00:00.690) 0:01:26.130 *********** 2025-07-06 20:07:41.333196 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.333216 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.333231 | orchestrator | skipping: [testbed-node-2] 2025-07-06 
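Note: the long run of skipped tasks above is the lookup_cluster logic deciding whether an OVN NB/SB Raft cluster already exists. On this fresh deployment there are no pre-existing ovn_nb_db/ovn_sb_db container volumes, so the port-liveness and leader/follower checks are skipped entirely and bootstrap-initial.yml sets the "new cluster" bootstrap arguments for all three hosts. Below is a minimal sketch of the kind of probe such a lookup performs (port liveness plus Raft cluster status); the control-socket path and the docker CLI are assumptions about this testbed, not values taken from the log.

# Sketch of an existing-cluster probe (assumed paths and CLI, illustration only)
- hosts: testbed-node-0
  become: true
  tasks:
    - name: Check whether the OVN NB port already answers (it does not on a fresh deploy)
      ansible.builtin.wait_for:
        host: 192.168.16.10
        port: 6641          # conventional NB port; 6642 (SB) appears in the log
        timeout: 5
      register: nb_port
      ignore_errors: true

    - name: Query the NB Raft cluster status inside the running container
      ansible.builtin.command: >
        docker exec ovn_nb_db
        ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
      changed_when: false
      when: nb_port is not failed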
20:07:41.333242 | orchestrator | 2025-07-06 20:07:41.333253 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-07-06 20:07:41.333264 | orchestrator | Sunday 06 July 2025 20:06:34 +0000 (0:00:00.312) 0:01:26.442 *********** 2025-07-06 20:07:41.333275 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.333285 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.333296 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.333307 | orchestrator | 2025-07-06 20:07:41.333318 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-07-06 20:07:41.333328 | orchestrator | Sunday 06 July 2025 20:06:34 +0000 (0:00:00.301) 0:01:26.744 *********** 2025-07-06 20:07:41.333339 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.333350 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.333361 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.333371 | orchestrator | 2025-07-06 20:07:41.333382 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-07-06 20:07:41.333393 | orchestrator | Sunday 06 July 2025 20:06:35 +0000 (0:00:00.435) 0:01:27.180 *********** 2025-07-06 20:07:41.333416 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.333427 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.333438 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.333449 | orchestrator | 2025-07-06 20:07:41.333459 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-07-06 20:07:41.333470 | orchestrator | Sunday 06 July 2025 20:06:35 +0000 (0:00:00.318) 0:01:27.499 *********** 2025-07-06 20:07:41.333481 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.333492 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.333502 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.333513 | orchestrator | 2025-07-06 20:07:41.333524 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-07-06 20:07:41.333534 | orchestrator | Sunday 06 July 2025 20:06:35 +0000 (0:00:00.274) 0:01:27.773 *********** 2025-07-06 20:07:41.333545 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.333556 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.333566 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.333577 | orchestrator | 2025-07-06 20:07:41.333588 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-07-06 20:07:41.333598 | orchestrator | Sunday 06 July 2025 20:06:36 +0000 (0:00:00.269) 0:01:28.043 *********** 2025-07-06 20:07:41.333611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.333654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.333696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.333717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.333746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.333765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.333782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.333810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.333828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.333844 | orchestrator | 2025-07-06 20:07:41.333861 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-07-06 20:07:41.333879 | orchestrator | Sunday 06 July 2025 20:06:37 +0000 (0:00:01.383) 0:01:29.426 *********** 2025-07-06 20:07:41.333896 
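Note: every loop in the ovn-db tasks iterates over the same per-service map (key = service name, value = container definition). Reconstructed from the logged items it looks like the YAML below; the variable name ovn_db_services is illustrative, but the keys and values are exactly those shown in the output above.

ovn_db_services:
  ovn-northd:
    container_name: ovn_northd
    group: ovn-northd
    enabled: true
    image: registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530
    volumes:
      - "/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "kolla_logs:/var/log/kolla/"
    dimensions: {}
  ovn-nb-db:
    container_name: ovn_nb_db
    group: ovn-nb-db
    enabled: true
    image: registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530
    volumes:
      - "/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "ovn_nb_db:/var/lib/openvswitch/ovn-nb/"
      - "kolla_logs:/var/log/kolla/"
    dimensions: {}
  ovn-sb-db:
    container_name: ovn_sb_db
    group: ovn-sb-db
    enabled: true
    image: registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530
    volumes:
      - "/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "ovn_sb_db:/var/lib/openvswitch/ovn-sb/"
      - "kolla_logs:/var/log/kolla/"
    dimensions: {}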
| orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.333914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.333933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.333962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.333982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.334009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.334078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.334111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.334132 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.334153 | orchestrator | 2025-07-06 20:07:41.334173 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-07-06 20:07:41.334192 | orchestrator | Sunday 06 July 2025 20:06:41 +0000 (0:00:03.772) 0:01:33.198 *********** 2025-07-06 20:07:41.334213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.334233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.334254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.334285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.334306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.334327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.334364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.334385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.334406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.334426 | orchestrator | 2025-07-06 20:07:41.334446 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-06 20:07:41.334465 | orchestrator | Sunday 06 July 2025 20:06:43 +0000 (0:00:01.977) 0:01:35.176 *********** 2025-07-06 20:07:41.334485 | orchestrator | 2025-07-06 20:07:41.334504 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-06 20:07:41.334524 | orchestrator | Sunday 06 July 2025 20:06:43 +0000 (0:00:00.071) 0:01:35.247 *********** 2025-07-06 20:07:41.334545 | orchestrator | 2025-07-06 20:07:41.334564 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-06 20:07:41.334584 | orchestrator | Sunday 06 July 2025 20:06:43 +0000 (0:00:00.057) 0:01:35.305 *********** 2025-07-06 20:07:41.334603 | orchestrator | 2025-07-06 20:07:41.334692 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-07-06 20:07:41.334713 | orchestrator | Sunday 06 July 2025 20:06:43 +0000 (0:00:00.060) 0:01:35.365 *********** 2025-07-06 20:07:41.334732 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:07:41.334751 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:07:41.334769 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:07:41.334787 | orchestrator | 2025-07-06 20:07:41.334806 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-07-06 20:07:41.334825 | orchestrator | Sunday 06 July 2025 20:06:51 +0000 (0:00:07.600) 0:01:42.965 *********** 2025-07-06 20:07:41.334844 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:07:41.334863 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:07:41.334880 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:07:41.334898 | orchestrator | 2025-07-06 20:07:41.334917 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-07-06 20:07:41.334935 | orchestrator | Sunday 06 July 2025 20:06:57 +0000 (0:00:06.786) 0:01:49.752 *********** 2025-07-06 20:07:41.334952 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:07:41.334969 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:07:41.334987 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:07:41.335004 
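Note: the container restarts only appear here, after all configuration tasks, because they are Ansible handlers: each config task notifies a restart handler, and the "Flush handlers" tasks force the notified handlers to run once per service (the same pattern produced the single ovn-controller restart earlier). A minimal sketch of that pattern follows; kolla-ansible drives the restart through its own container module, so community.docker.docker_container below is a stand-in, and the template source and mode are illustrative.

# Sketch of the notify/flush-handlers pattern (stand-in modules and paths)
- hosts: testbed-node-0
  become: true
  tasks:
    - name: Copying over config.json files for services (stand-in for the kolla template task)
      ansible.builtin.template:
        src: config.json.j2
        dest: /etc/kolla/ovn-nb-db/config.json
        mode: "0660"
      notify: Restart ovn-nb-db container

  handlers:
    - name: Restart ovn-nb-db container
      community.docker.docker_container:
        name: ovn_nb_db
        state: started
        restart: true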
| orchestrator | 2025-07-06 20:07:41.335023 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-07-06 20:07:41.335042 | orchestrator | Sunday 06 July 2025 20:07:00 +0000 (0:00:02.438) 0:01:52.191 *********** 2025-07-06 20:07:41.335059 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.335077 | orchestrator | 2025-07-06 20:07:41.335095 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-07-06 20:07:41.335113 | orchestrator | Sunday 06 July 2025 20:07:00 +0000 (0:00:00.126) 0:01:52.318 *********** 2025-07-06 20:07:41.335143 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:07:41.335160 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.335175 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.335190 | orchestrator | 2025-07-06 20:07:41.335215 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-07-06 20:07:41.335231 | orchestrator | Sunday 06 July 2025 20:07:01 +0000 (0:00:00.715) 0:01:53.033 *********** 2025-07-06 20:07:41.335247 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.335263 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.335279 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:07:41.335294 | orchestrator | 2025-07-06 20:07:41.335309 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-07-06 20:07:41.335326 | orchestrator | Sunday 06 July 2025 20:07:01 +0000 (0:00:00.747) 0:01:53.781 *********** 2025-07-06 20:07:41.335342 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.335358 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:07:41.335373 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.335388 | orchestrator | 2025-07-06 20:07:41.335402 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-07-06 20:07:41.335418 | orchestrator | Sunday 06 July 2025 20:07:02 +0000 (0:00:00.715) 0:01:54.497 *********** 2025-07-06 20:07:41.335434 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.335451 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.335467 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:07:41.335482 | orchestrator | 2025-07-06 20:07:41.335499 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-07-06 20:07:41.335514 | orchestrator | Sunday 06 July 2025 20:07:03 +0000 (0:00:00.564) 0:01:55.061 *********** 2025-07-06 20:07:41.335531 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:07:41.335548 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.335564 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.335581 | orchestrator | 2025-07-06 20:07:41.335598 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-07-06 20:07:41.335638 | orchestrator | Sunday 06 July 2025 20:07:03 +0000 (0:00:00.767) 0:01:55.829 *********** 2025-07-06 20:07:41.335656 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.335673 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:07:41.335689 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.335706 | orchestrator | 2025-07-06 20:07:41.335722 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-07-06 20:07:41.335816 | orchestrator | Sunday 06 July 2025 20:07:04 +0000 (0:00:00.950) 0:01:56.780 *********** 2025-07-06 
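Note: "Configure OVN NB/SB connection settings" runs only on the node that currently holds the Raft leadership (testbed-node-0 here; the other two are skipped), after which every node simply waits for the database sockets to answer. The sketch below shows equivalent commands; port 6641 for the NB database is the conventional default rather than a value from this log, and driving ovn-nbctl/ovn-sbctl through docker exec is an assumption about this testbed.

# Sketch of the leader-only connection setup (assumed CLI, illustration only)
- hosts: testbed-node-0    # only the current Raft leader; followers are skipped
  become: true
  tasks:
    - name: Expose the NB database on its well-known port
      ansible.builtin.command: >
        docker exec ovn_nb_db ovn-nbctl set-connection ptcp:6641:0.0.0.0

    - name: Expose the SB database on its well-known port
      ansible.builtin.command: >
        docker exec ovn_sb_db ovn-sbctl set-connection ptcp:6642:0.0.0.0

    - name: Wait until the NB and SB sockets answer
      ansible.builtin.wait_for:
        host: 192.168.16.10
        port: "{{ item }}"
        timeout: 60
      loop: [6641, 6642]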
20:07:41.335843 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.335860 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:07:41.335877 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.335894 | orchestrator | 2025-07-06 20:07:41.335910 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-07-06 20:07:41.335927 | orchestrator | Sunday 06 July 2025 20:07:05 +0000 (0:00:00.292) 0:01:57.072 *********** 2025-07-06 20:07:41.335945 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.335963 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.335981 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336012 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336032 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336049 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336079 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336097 | orchestrator | ok: 
[testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336120 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336136 | orchestrator | 2025-07-06 20:07:41.336152 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-07-06 20:07:41.336168 | orchestrator | Sunday 06 July 2025 20:07:06 +0000 (0:00:01.319) 0:01:58.392 *********** 2025-07-06 20:07:41.336185 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336202 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336218 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336245 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336313 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336368 | orchestrator | 2025-07-06 20:07:41.336391 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-07-06 20:07:41.336409 | orchestrator | Sunday 06 July 2025 20:07:10 +0000 (0:00:04.266) 0:02:02.658 *********** 2025-07-06 20:07:41.336426 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336442 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336459 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336500 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336573 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:07:41.336588 | orchestrator | 2025-07-06 20:07:41.336604 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-06 20:07:41.336640 | orchestrator | Sunday 06 July 2025 20:07:13 +0000 (0:00:03.023) 0:02:05.682 *********** 2025-07-06 20:07:41.336656 | orchestrator | 2025-07-06 20:07:41.336671 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-06 20:07:41.336693 | orchestrator | Sunday 06 July 2025 20:07:13 +0000 (0:00:00.063) 0:02:05.746 *********** 2025-07-06 20:07:41.336710 | orchestrator | 2025-07-06 20:07:41.336726 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-06 20:07:41.336742 | orchestrator | Sunday 06 July 2025 20:07:13 +0000 (0:00:00.063) 0:02:05.809 *********** 2025-07-06 20:07:41.336758 | orchestrator | 2025-07-06 20:07:41.336774 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-07-06 20:07:41.336790 | orchestrator | Sunday 06 July 2025 20:07:14 +0000 (0:00:00.062) 0:02:05.872 *********** 2025-07-06 20:07:41.336805 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:07:41.336822 | orchestrator | changed: [testbed-node-2] 2025-07-06 
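The "Copying over config.json files for services" task above distributes the per-service config.json that the kolla start scripts consume at container start. As an illustration only, the file has roughly the shape sketched below; the command and file names used here are assumptions, since the real contents are templated by kolla-ansible per service.

```python
# Illustrative only: general shape of a kolla config.json (command plus a list
# of config_files to copy into place). Field values here are assumptions.
import json

ovn_nb_db_config = {
    "command": "/usr/share/ovn/scripts/ovn-ctl run_nb_ovsdb",  # assumed command
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/ovn-nb-db.json",  # assumed name
            "dest": "/etc/ovn/ovn-nb-db.conf",                       # assumed path
            "owner": "root",
            "perm": "0600",
        }
    ],
}

with open("config.json", "w") as fh:
    json.dump(ovn_nb_db_config, fh, indent=4)
```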
20:07:41.336848 | orchestrator | 2025-07-06 20:07:41.336866 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-07-06 20:07:41.336883 | orchestrator | Sunday 06 July 2025 20:07:20 +0000 (0:00:06.291) 0:02:12.163 *********** 2025-07-06 20:07:41.336900 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:07:41.336918 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:07:41.336936 | orchestrator | 2025-07-06 20:07:41.336952 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-07-06 20:07:41.336969 | orchestrator | Sunday 06 July 2025 20:07:26 +0000 (0:00:06.145) 0:02:18.309 *********** 2025-07-06 20:07:41.336986 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:07:41.337003 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:07:41.337021 | orchestrator | 2025-07-06 20:07:41.337039 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-07-06 20:07:41.337057 | orchestrator | Sunday 06 July 2025 20:07:32 +0000 (0:00:06.440) 0:02:24.749 *********** 2025-07-06 20:07:41.337074 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:07:41.337092 | orchestrator | 2025-07-06 20:07:41.337108 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-07-06 20:07:41.337125 | orchestrator | Sunday 06 July 2025 20:07:33 +0000 (0:00:00.124) 0:02:24.874 *********** 2025-07-06 20:07:41.337141 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.337158 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:07:41.337175 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.337192 | orchestrator | 2025-07-06 20:07:41.337209 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-07-06 20:07:41.337226 | orchestrator | Sunday 06 July 2025 20:07:34 +0000 (0:00:01.125) 0:02:26.000 *********** 2025-07-06 20:07:41.337242 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.337258 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.337273 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:07:41.337287 | orchestrator | 2025-07-06 20:07:41.337302 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-07-06 20:07:41.337317 | orchestrator | Sunday 06 July 2025 20:07:34 +0000 (0:00:00.767) 0:02:26.767 *********** 2025-07-06 20:07:41.337334 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.337351 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:07:41.337367 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.337383 | orchestrator | 2025-07-06 20:07:41.337399 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-07-06 20:07:41.337416 | orchestrator | Sunday 06 July 2025 20:07:35 +0000 (0:00:01.019) 0:02:27.787 *********** 2025-07-06 20:07:41.337433 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:07:41.337449 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:07:41.337466 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:07:41.337482 | orchestrator | 2025-07-06 20:07:41.337499 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-07-06 20:07:41.337515 | orchestrator | Sunday 06 July 2025 20:07:36 +0000 (0:00:00.739) 0:02:28.526 *********** 2025-07-06 20:07:41.337531 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.337547 | orchestrator | ok: 
[testbed-node-1] 2025-07-06 20:07:41.337564 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.337582 | orchestrator | 2025-07-06 20:07:41.337598 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-07-06 20:07:41.337695 | orchestrator | Sunday 06 July 2025 20:07:37 +0000 (0:00:01.016) 0:02:29.543 *********** 2025-07-06 20:07:41.337717 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:07:41.337734 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:07:41.337751 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:07:41.337767 | orchestrator | 2025-07-06 20:07:41.337783 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:07:41.337801 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-07-06 20:07:41.337828 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-07-06 20:07:41.337854 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-07-06 20:07:41.337868 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:07:41.337882 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:07:41.337895 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:07:41.337909 | orchestrator | 2025-07-06 20:07:41.337923 | orchestrator | 2025-07-06 20:07:41.337936 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:07:41.337950 | orchestrator | Sunday 06 July 2025 20:07:38 +0000 (0:00:01.077) 0:02:30.620 *********** 2025-07-06 20:07:41.337963 | orchestrator | =============================================================================== 2025-07-06 20:07:41.337977 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 41.53s 2025-07-06 20:07:41.337998 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.91s 2025-07-06 20:07:41.338012 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.89s 2025-07-06 20:07:41.338090 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 12.93s 2025-07-06 20:07:41.338106 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.88s 2025-07-06 20:07:41.338119 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.27s 2025-07-06 20:07:41.338132 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.77s 2025-07-06 20:07:41.338146 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.02s 2025-07-06 20:07:41.338160 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.53s 2025-07-06 20:07:41.338173 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.12s 2025-07-06 20:07:41.338188 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.98s 2025-07-06 20:07:41.338202 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.54s 2025-07-06 20:07:41.338216 | orchestrator | ovn-controller : Copying over 
config.json files for services ------------ 1.50s 2025-07-06 20:07:41.338230 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.50s 2025-07-06 20:07:41.338244 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.38s 2025-07-06 20:07:41.338257 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.36s 2025-07-06 20:07:41.338272 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.32s 2025-07-06 20:07:41.338287 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.16s 2025-07-06 20:07:41.338301 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.15s 2025-07-06 20:07:41.338315 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.14s 2025-07-06 20:07:44.372277 | orchestrator | 2025-07-06 20:07:44 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:44.374182 | orchestrator | 2025-07-06 20:07:44 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:44.374215 | orchestrator | 2025-07-06 20:07:44 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:47.441457 | orchestrator | 2025-07-06 20:07:47 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:47.443105 | orchestrator | 2025-07-06 20:07:47 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:47.443194 | orchestrator | 2025-07-06 20:07:47 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:50.494677 | orchestrator | 2025-07-06 20:07:50 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:50.494998 | orchestrator | 2025-07-06 20:07:50 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:50.495019 | orchestrator | 2025-07-06 20:07:50 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:53.556107 | orchestrator | 2025-07-06 20:07:53 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:53.557501 | orchestrator | 2025-07-06 20:07:53 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:53.557537 | orchestrator | 2025-07-06 20:07:53 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:56.609417 | orchestrator | 2025-07-06 20:07:56 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:56.611853 | orchestrator | 2025-07-06 20:07:56 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:56.612361 | orchestrator | 2025-07-06 20:07:56 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:07:59.663358 | orchestrator | 2025-07-06 20:07:59 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:07:59.663453 | orchestrator | 2025-07-06 20:07:59 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:07:59.666271 | orchestrator | 2025-07-06 20:07:59 | INFO  | Task 1c6b7699-4135-40ac-a36e-7ab2e52803a6 is in state STARTED 2025-07-06 20:07:59.666497 | orchestrator | 2025-07-06 20:07:59 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:02.732372 | orchestrator | 2025-07-06 20:08:02 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:02.732834 | orchestrator | 2025-07-06 
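The repeated "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines that follow are a simple poll loop over the orchestrator's task IDs. A minimal sketch of that pattern is shown below; fetch_state() is a hypothetical stand-in for however the osism client actually looks up task state (for example a Celery result backend), not its real implementation.

```python
# Minimal sketch of the polling pattern in the log: query each pending task,
# drop the finished ones, wait, repeat. fetch_state() is hypothetical.
import time

def fetch_state(task_id: str) -> str:
    """Hypothetical helper; replace with a real task-state lookup."""
    raise NotImplementedError

def wait_for_tasks(task_ids: set[str], interval: float = 1.0) -> None:
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```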
20:08:02 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:02.733694 | orchestrator | 2025-07-06 20:08:02 | INFO  | Task 1c6b7699-4135-40ac-a36e-7ab2e52803a6 is in state STARTED 2025-07-06 20:08:02.733868 | orchestrator | 2025-07-06 20:08:02 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:05.775094 | orchestrator | 2025-07-06 20:08:05 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:05.775196 | orchestrator | 2025-07-06 20:08:05 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:05.777290 | orchestrator | 2025-07-06 20:08:05 | INFO  | Task 1c6b7699-4135-40ac-a36e-7ab2e52803a6 is in state STARTED 2025-07-06 20:08:05.777380 | orchestrator | 2025-07-06 20:08:05 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:08.825059 | orchestrator | 2025-07-06 20:08:08 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:08.825395 | orchestrator | 2025-07-06 20:08:08 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:08.826344 | orchestrator | 2025-07-06 20:08:08 | INFO  | Task 1c6b7699-4135-40ac-a36e-7ab2e52803a6 is in state STARTED 2025-07-06 20:08:08.826379 | orchestrator | 2025-07-06 20:08:08 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:11.879872 | orchestrator | 2025-07-06 20:08:11 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:11.880726 | orchestrator | 2025-07-06 20:08:11 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:11.881062 | orchestrator | 2025-07-06 20:08:11 | INFO  | Task 1c6b7699-4135-40ac-a36e-7ab2e52803a6 is in state STARTED 2025-07-06 20:08:11.881226 | orchestrator | 2025-07-06 20:08:11 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:14.930303 | orchestrator | 2025-07-06 20:08:14 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:14.931024 | orchestrator | 2025-07-06 20:08:14 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:14.932372 | orchestrator | 2025-07-06 20:08:14 | INFO  | Task 1c6b7699-4135-40ac-a36e-7ab2e52803a6 is in state STARTED 2025-07-06 20:08:14.932529 | orchestrator | 2025-07-06 20:08:14 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:17.976882 | orchestrator | 2025-07-06 20:08:17 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:17.978561 | orchestrator | 2025-07-06 20:08:17 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:17.979548 | orchestrator | 2025-07-06 20:08:17 | INFO  | Task 1c6b7699-4135-40ac-a36e-7ab2e52803a6 is in state SUCCESS 2025-07-06 20:08:17.979919 | orchestrator | 2025-07-06 20:08:17 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:21.038382 | orchestrator | 2025-07-06 20:08:21 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:21.041445 | orchestrator | 2025-07-06 20:08:21 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:21.041732 | orchestrator | 2025-07-06 20:08:21 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:24.091897 | orchestrator | 2025-07-06 20:08:24 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:24.092432 | orchestrator | 2025-07-06 20:08:24 | INFO  | Task 
809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:24.092460 | orchestrator | 2025-07-06 20:08:24 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:27.152174 | orchestrator | 2025-07-06 20:08:27 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:27.154452 | orchestrator | 2025-07-06 20:08:27 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:27.154766 | orchestrator | 2025-07-06 20:08:27 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:30.203148 | orchestrator | 2025-07-06 20:08:30 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:30.204898 | orchestrator | 2025-07-06 20:08:30 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:30.204929 | orchestrator | 2025-07-06 20:08:30 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:33.251732 | orchestrator | 2025-07-06 20:08:33 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:33.252991 | orchestrator | 2025-07-06 20:08:33 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:33.253183 | orchestrator | 2025-07-06 20:08:33 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:36.306951 | orchestrator | 2025-07-06 20:08:36 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:36.308967 | orchestrator | 2025-07-06 20:08:36 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:36.309041 | orchestrator | 2025-07-06 20:08:36 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:39.358171 | orchestrator | 2025-07-06 20:08:39 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:39.358370 | orchestrator | 2025-07-06 20:08:39 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:39.358391 | orchestrator | 2025-07-06 20:08:39 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:42.407185 | orchestrator | 2025-07-06 20:08:42 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:42.408656 | orchestrator | 2025-07-06 20:08:42 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:42.408710 | orchestrator | 2025-07-06 20:08:42 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:45.463951 | orchestrator | 2025-07-06 20:08:45 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:45.467926 | orchestrator | 2025-07-06 20:08:45 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:45.468674 | orchestrator | 2025-07-06 20:08:45 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:48.517424 | orchestrator | 2025-07-06 20:08:48 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:48.518065 | orchestrator | 2025-07-06 20:08:48 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:48.518109 | orchestrator | 2025-07-06 20:08:48 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:51.555736 | orchestrator | 2025-07-06 20:08:51 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:51.557328 | orchestrator | 2025-07-06 20:08:51 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:51.557365 | orchestrator 
| 2025-07-06 20:08:51 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:54.608820 | orchestrator | 2025-07-06 20:08:54 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:54.608983 | orchestrator | 2025-07-06 20:08:54 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:54.609080 | orchestrator | 2025-07-06 20:08:54 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:57.646592 | orchestrator | 2025-07-06 20:08:57 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:08:57.647182 | orchestrator | 2025-07-06 20:08:57 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:08:57.647255 | orchestrator | 2025-07-06 20:08:57 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:00.685060 | orchestrator | 2025-07-06 20:09:00 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:00.685826 | orchestrator | 2025-07-06 20:09:00 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:00.685864 | orchestrator | 2025-07-06 20:09:00 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:03.728654 | orchestrator | 2025-07-06 20:09:03 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:03.730978 | orchestrator | 2025-07-06 20:09:03 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:03.731044 | orchestrator | 2025-07-06 20:09:03 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:06.786655 | orchestrator | 2025-07-06 20:09:06 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:06.786753 | orchestrator | 2025-07-06 20:09:06 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:06.786777 | orchestrator | 2025-07-06 20:09:06 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:09.821191 | orchestrator | 2025-07-06 20:09:09 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:09.822587 | orchestrator | 2025-07-06 20:09:09 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:09.822632 | orchestrator | 2025-07-06 20:09:09 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:12.865427 | orchestrator | 2025-07-06 20:09:12 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:12.865592 | orchestrator | 2025-07-06 20:09:12 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:12.865634 | orchestrator | 2025-07-06 20:09:12 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:15.896998 | orchestrator | 2025-07-06 20:09:15 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:15.899925 | orchestrator | 2025-07-06 20:09:15 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:15.899985 | orchestrator | 2025-07-06 20:09:15 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:18.944486 | orchestrator | 2025-07-06 20:09:18 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:18.944631 | orchestrator | 2025-07-06 20:09:18 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:18.944647 | orchestrator | 2025-07-06 20:09:18 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:21.997012 | 
orchestrator | 2025-07-06 20:09:21 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:21.997093 | orchestrator | 2025-07-06 20:09:21 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:21.997104 | orchestrator | 2025-07-06 20:09:21 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:25.044312 | orchestrator | 2025-07-06 20:09:25 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:25.044412 | orchestrator | 2025-07-06 20:09:25 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:25.044427 | orchestrator | 2025-07-06 20:09:25 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:28.084364 | orchestrator | 2025-07-06 20:09:28 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:28.087272 | orchestrator | 2025-07-06 20:09:28 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:28.087357 | orchestrator | 2025-07-06 20:09:28 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:31.128993 | orchestrator | 2025-07-06 20:09:31 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:31.129117 | orchestrator | 2025-07-06 20:09:31 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:31.129143 | orchestrator | 2025-07-06 20:09:31 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:34.167660 | orchestrator | 2025-07-06 20:09:34 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:34.167996 | orchestrator | 2025-07-06 20:09:34 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:34.168027 | orchestrator | 2025-07-06 20:09:34 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:37.215955 | orchestrator | 2025-07-06 20:09:37 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:37.218330 | orchestrator | 2025-07-06 20:09:37 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:37.218396 | orchestrator | 2025-07-06 20:09:37 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:40.263589 | orchestrator | 2025-07-06 20:09:40 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:40.264477 | orchestrator | 2025-07-06 20:09:40 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:40.264953 | orchestrator | 2025-07-06 20:09:40 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:43.305771 | orchestrator | 2025-07-06 20:09:43 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:43.306996 | orchestrator | 2025-07-06 20:09:43 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:43.307043 | orchestrator | 2025-07-06 20:09:43 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:46.352266 | orchestrator | 2025-07-06 20:09:46 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:46.354908 | orchestrator | 2025-07-06 20:09:46 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:46.354990 | orchestrator | 2025-07-06 20:09:46 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:49.396772 | orchestrator | 2025-07-06 20:09:49 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state 
STARTED 2025-07-06 20:09:49.397178 | orchestrator | 2025-07-06 20:09:49 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:49.397227 | orchestrator | 2025-07-06 20:09:49 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:52.444812 | orchestrator | 2025-07-06 20:09:52 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:52.447315 | orchestrator | 2025-07-06 20:09:52 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:52.447396 | orchestrator | 2025-07-06 20:09:52 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:55.483060 | orchestrator | 2025-07-06 20:09:55 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:55.483271 | orchestrator | 2025-07-06 20:09:55 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:55.483294 | orchestrator | 2025-07-06 20:09:55 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:58.527885 | orchestrator | 2025-07-06 20:09:58 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:09:58.529904 | orchestrator | 2025-07-06 20:09:58 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:09:58.529959 | orchestrator | 2025-07-06 20:09:58 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:01.586367 | orchestrator | 2025-07-06 20:10:01 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:01.587047 | orchestrator | 2025-07-06 20:10:01 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:10:01.587095 | orchestrator | 2025-07-06 20:10:01 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:04.639279 | orchestrator | 2025-07-06 20:10:04 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:04.643090 | orchestrator | 2025-07-06 20:10:04 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:10:04.643716 | orchestrator | 2025-07-06 20:10:04 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:07.685852 | orchestrator | 2025-07-06 20:10:07 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:07.689113 | orchestrator | 2025-07-06 20:10:07 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state STARTED 2025-07-06 20:10:07.689360 | orchestrator | 2025-07-06 20:10:07 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:10.750770 | orchestrator | 2025-07-06 20:10:10 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:10:10.752718 | orchestrator | 2025-07-06 20:10:10 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:10.753860 | orchestrator | 2025-07-06 20:10:10 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:10:10.762753 | orchestrator | 2025-07-06 20:10:10 | INFO  | Task 809a34d5-2203-4c35-b189-c49140053dd9 is in state SUCCESS 2025-07-06 20:10:10.766803 | orchestrator | 2025-07-06 20:10:10.766918 | orchestrator | None 2025-07-06 20:10:10.766934 | orchestrator | 2025-07-06 20:10:10.766946 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:10:10.766958 | orchestrator | 2025-07-06 20:10:10.766969 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 
20:10:10.767004 | orchestrator | Sunday 06 July 2025 20:03:57 +0000 (0:00:00.575) 0:00:00.575 *********** 2025-07-06 20:10:10.767017 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:10:10.767029 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:10:10.767040 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:10:10.767050 | orchestrator | 2025-07-06 20:10:10.767062 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:10:10.767073 | orchestrator | Sunday 06 July 2025 20:03:57 +0000 (0:00:00.580) 0:00:01.155 *********** 2025-07-06 20:10:10.767113 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-07-06 20:10:10.767126 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-07-06 20:10:10.767137 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-07-06 20:10:10.767148 | orchestrator | 2025-07-06 20:10:10.767191 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-07-06 20:10:10.767203 | orchestrator | 2025-07-06 20:10:10.767215 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-07-06 20:10:10.767225 | orchestrator | Sunday 06 July 2025 20:03:58 +0000 (0:00:00.734) 0:00:01.890 *********** 2025-07-06 20:10:10.767236 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.767247 | orchestrator | 2025-07-06 20:10:10.767258 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-07-06 20:10:10.767269 | orchestrator | Sunday 06 July 2025 20:03:59 +0000 (0:00:00.713) 0:00:02.603 *********** 2025-07-06 20:10:10.767280 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:10:10.767291 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:10:10.767325 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:10:10.767339 | orchestrator | 2025-07-06 20:10:10.767351 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-07-06 20:10:10.767364 | orchestrator | Sunday 06 July 2025 20:04:00 +0000 (0:00:00.902) 0:00:03.506 *********** 2025-07-06 20:10:10.767377 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.767389 | orchestrator | 2025-07-06 20:10:10.767411 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-07-06 20:10:10.767525 | orchestrator | Sunday 06 July 2025 20:04:01 +0000 (0:00:01.600) 0:00:05.106 *********** 2025-07-06 20:10:10.767541 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:10:10.767554 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:10:10.767568 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:10:10.767659 | orchestrator | 2025-07-06 20:10:10.767673 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-07-06 20:10:10.767686 | orchestrator | Sunday 06 July 2025 20:04:02 +0000 (0:00:00.633) 0:00:05.739 *********** 2025-07-06 20:10:10.767697 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-07-06 20:10:10.767728 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-07-06 20:10:10.767739 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-07-06 20:10:10.767750 
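The "Setting sysctl values" task here (its remaining items continue below) enables net.ipv6.ip_nonlocal_bind, and next its IPv4 equivalent, so keepalived and haproxy can bind the VIP even on standby nodes. A small stand-in for the Ansible sysctl module is sketched below; it assumes root privileges, and the sysctl.d file name is an assumption.

```python
# Sketch of applying and persisting the loadbalancer sysctl values; a stand-in
# for the Ansible sysctl module, not kolla-ansible's own code.
import subprocess
from pathlib import Path

SYSCTL_VALUES = {
    # Allow binding the VIP while this node is not the active keepalived master.
    "net.ipv6.ip_nonlocal_bind": "1",
    "net.ipv4.ip_nonlocal_bind": "1",
    "net.unix.max_dgram_qlen": "128",
}

def apply_sysctl(values: dict[str, str],
                 conf: str = "/etc/sysctl.d/99-loadbalancer.conf") -> None:
    # Persist the values for the next boot (file name is an assumption) ...
    Path(conf).write_text("".join(f"{k} = {v}\n" for k, v in values.items()))
    # ... and apply them to the running kernel.
    for key, value in values.items():
        subprocess.run(["sysctl", "-w", f"{key}={value}"], check=True)
```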
| orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-07-06 20:10:10.767760 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-07-06 20:10:10.767771 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-07-06 20:10:10.767782 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-07-06 20:10:10.767793 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-07-06 20:10:10.767804 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-07-06 20:10:10.767814 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-07-06 20:10:10.767825 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-07-06 20:10:10.767836 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-07-06 20:10:10.767847 | orchestrator | 2025-07-06 20:10:10.767857 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-06 20:10:10.767868 | orchestrator | Sunday 06 July 2025 20:04:04 +0000 (0:00:02.463) 0:00:08.203 *********** 2025-07-06 20:10:10.767906 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-07-06 20:10:10.767917 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-07-06 20:10:10.767928 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-07-06 20:10:10.767939 | orchestrator | 2025-07-06 20:10:10.767950 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-06 20:10:10.767960 | orchestrator | Sunday 06 July 2025 20:04:05 +0000 (0:00:00.989) 0:00:09.192 *********** 2025-07-06 20:10:10.767971 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-07-06 20:10:10.767982 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-07-06 20:10:10.767993 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-07-06 20:10:10.768004 | orchestrator | 2025-07-06 20:10:10.768014 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-06 20:10:10.768025 | orchestrator | Sunday 06 July 2025 20:04:07 +0000 (0:00:01.524) 0:00:10.717 *********** 2025-07-06 20:10:10.768036 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-07-06 20:10:10.768047 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.768072 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-07-06 20:10:10.768083 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.768094 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-07-06 20:10:10.768105 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.768115 | orchestrator | 2025-07-06 20:10:10.768126 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-07-06 20:10:10.768137 | orchestrator | Sunday 06 July 2025 20:04:07 +0000 (0:00:00.498) 0:00:11.216 *********** 2025-07-06 20:10:10.768152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-06 20:10:10.768177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-06 20:10:10.768198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-06 20:10:10.768210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:10:10.768222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:10:10.768241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:10:10.768253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:10:10.768265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:10:10.768287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:10:10.768299 | orchestrator | 2025-07-06 20:10:10.768310 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-07-06 20:10:10.768321 | orchestrator | Sunday 06 July 2025 20:04:09 +0000 (0:00:01.746) 0:00:12.962 *********** 2025-07-06 20:10:10.768332 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.768343 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.768354 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.768365 | orchestrator | 2025-07-06 20:10:10.768376 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-07-06 20:10:10.768387 | orchestrator | Sunday 06 July 2025 20:04:10 +0000 (0:00:01.205) 0:00:14.168 *********** 2025-07-06 20:10:10.768397 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-07-06 20:10:10.768408 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-07-06 20:10:10.768419 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-07-06 20:10:10.768430 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-07-06 20:10:10.768441 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-07-06 20:10:10.768451 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-07-06 20:10:10.768499 | orchestrator | 2025-07-06 20:10:10.768520 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-07-06 20:10:10.768532 | orchestrator | Sunday 06 July 2025 20:04:13 +0000 (0:00:02.557) 0:00:16.726 
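The haproxy and proxysql service definitions above carry healthcheck specs whose tests invoke helpers such as healthcheck_curl and healthcheck_listen inside the kolla images. The Python sketch below only illustrates what those checks conceptually verify (an HTTP endpoint that answers, a port with a listener); it is not the actual shell helpers shipped in the images.

```python
# Conceptual equivalents of the healthcheck tests above; illustration only.
import socket
import urllib.request

def check_http(url: str, timeout: float = 5.0) -> bool:
    """Roughly what 'healthcheck_curl http://192.168.16.10:61313' verifies."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        return False

def check_listen(port: int, host: str = "127.0.0.1", timeout: float = 5.0) -> bool:
    """Roughly what 'healthcheck_listen proxysql 6032' verifies: something listens."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```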
*********** 2025-07-06 20:10:10.768542 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.768553 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.768564 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.768575 | orchestrator | 2025-07-06 20:10:10.768653 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-07-06 20:10:10.768666 | orchestrator | Sunday 06 July 2025 20:04:14 +0000 (0:00:01.456) 0:00:18.182 *********** 2025-07-06 20:10:10.768677 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:10:10.768688 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:10:10.768699 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:10:10.768710 | orchestrator | 2025-07-06 20:10:10.768721 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-07-06 20:10:10.768732 | orchestrator | Sunday 06 July 2025 20:04:16 +0000 (0:00:01.653) 0:00:19.835 *********** 2025-07-06 20:10:10.768744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.768766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.768785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.768798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8779613492e665daacf74ea04b596911425fbd9e', '__omit_place_holder__8779613492e665daacf74ea04b596911425fbd9e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-06 20:10:10.768816 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.768828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.768839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.768851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.768895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8779613492e665daacf74ea04b596911425fbd9e', '__omit_place_holder__8779613492e665daacf74ea04b596911425fbd9e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-06 20:10:10.769028 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.769065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.769077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.769136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.769150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8779613492e665daacf74ea04b596911425fbd9e', '__omit_place_holder__8779613492e665daacf74ea04b596911425fbd9e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-06 20:10:10.769162 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.769173 | orchestrator | 2025-07-06 20:10:10.769238 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-07-06 20:10:10.769252 | orchestrator | Sunday 06 July 2025 20:04:17 +0000 (0:00:00.947) 0:00:20.782 *********** 2025-07-06 20:10:10.769263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-06 20:10:10.769292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-06 20:10:10.769305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-06 20:10:10.769317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:10:10.769333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.769345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8779613492e665daacf74ea04b596911425fbd9e', '__omit_place_holder__8779613492e665daacf74ea04b596911425fbd9e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-06 20:10:10.769357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 
6032'], 'timeout': '30'}}}) 2025-07-06 20:10:10.769368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.769392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8779613492e665daacf74ea04b596911425fbd9e', '__omit_place_holder__8779613492e665daacf74ea04b596911425fbd9e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-06 20:10:10.769404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:10:10.769416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.769432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8779613492e665daacf74ea04b596911425fbd9e', '__omit_place_holder__8779613492e665daacf74ea04b596911425fbd9e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-06 20:10:10.769444 | orchestrator | 2025-07-06 20:10:10.769455 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-07-06 20:10:10.769490 | orchestrator | Sunday 06 July 2025 20:04:20 +0000 
(0:00:03.312) 0:00:24.095 *********** 2025-07-06 20:10:10.769511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-06 20:10:10.769548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-06 20:10:10.769573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-06 20:10:10.769585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:10:10.769598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:10:10.769618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 
'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:10:10.769630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:10:10.769642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:10:10.769661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:10:10.769729 | orchestrator | 2025-07-06 20:10:10.769741 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-07-06 20:10:10.769818 | orchestrator | Sunday 06 July 2025 20:04:24 +0000 (0:00:03.906) 0:00:28.001 *********** 2025-07-06 20:10:10.769892 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-06 20:10:10.772615 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-06 20:10:10.772657 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-06 20:10:10.772668 | orchestrator | 2025-07-06 20:10:10.772678 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-07-06 20:10:10.772687 | orchestrator | Sunday 06 July 2025 20:04:26 +0000 (0:00:01.858) 0:00:29.860 *********** 2025-07-06 20:10:10.772697 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-06 20:10:10.772707 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 
2025-07-06 20:10:10.772717 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-06 20:10:10.772727 | orchestrator | 2025-07-06 20:10:10.772736 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-07-06 20:10:10.772746 | orchestrator | Sunday 06 July 2025 20:04:30 +0000 (0:00:04.030) 0:00:33.890 *********** 2025-07-06 20:10:10.772756 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.772765 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.772775 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.772784 | orchestrator | 2025-07-06 20:10:10.772794 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-07-06 20:10:10.772804 | orchestrator | Sunday 06 July 2025 20:04:31 +0000 (0:00:01.366) 0:00:35.256 *********** 2025-07-06 20:10:10.772813 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-06 20:10:10.772825 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-06 20:10:10.772835 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-06 20:10:10.772844 | orchestrator | 2025-07-06 20:10:10.772858 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-07-06 20:10:10.772868 | orchestrator | Sunday 06 July 2025 20:04:35 +0000 (0:00:03.304) 0:00:38.561 *********** 2025-07-06 20:10:10.772877 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-06 20:10:10.772887 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-06 20:10:10.772897 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-06 20:10:10.772918 | orchestrator | 2025-07-06 20:10:10.772928 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-07-06 20:10:10.772938 | orchestrator | Sunday 06 July 2025 20:04:37 +0000 (0:00:02.007) 0:00:40.568 *********** 2025-07-06 20:10:10.772948 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-07-06 20:10:10.772957 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-07-06 20:10:10.772967 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-07-06 20:10:10.772977 | orchestrator | 2025-07-06 20:10:10.772986 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-07-06 20:10:10.772996 | orchestrator | Sunday 06 July 2025 20:04:38 +0000 (0:00:01.446) 0:00:42.015 *********** 2025-07-06 20:10:10.773006 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-07-06 20:10:10.773015 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-07-06 20:10:10.773025 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-07-06 20:10:10.773034 | orchestrator | 2025-07-06 20:10:10.773124 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-07-06 20:10:10.773163 | orchestrator | Sunday 06 July 2025 20:04:39 +0000 
(0:00:01.362) 0:00:43.378 *********** 2025-07-06 20:10:10.773175 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.773187 | orchestrator | 2025-07-06 20:10:10.773198 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-07-06 20:10:10.773208 | orchestrator | Sunday 06 July 2025 20:04:40 +0000 (0:00:00.759) 0:00:44.138 *********** 2025-07-06 20:10:10.773222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-06 20:10:10.773248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-06 20:10:10.773260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-06 20:10:10.773276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:10:10.773295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:10:10.773308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:10:10.773321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:10:10.773333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:10:10.773349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:10:10.773360 | orchestrator | 2025-07-06 20:10:10.773369 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-07-06 20:10:10.773379 | orchestrator | Sunday 06 July 2025 20:04:43 +0000 (0:00:03.095) 0:00:47.234 *********** 2025-07-06 20:10:10.773389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.773416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.773427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.773437 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.773496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.773516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.773535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.773545 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.773555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.773575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.773765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.773778 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.773788 | orchestrator | 2025-07-06 20:10:10.773798 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-07-06 20:10:10.773808 | orchestrator | Sunday 06 July 2025 20:04:44 +0000 (0:00:00.603) 0:00:47.837 *********** 2025-07-06 20:10:10.773818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.773829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2025-07-06 20:10:10.773871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.773882 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.773924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.773946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.773960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.773971 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.773982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.773999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.774011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.774064 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.774076 | orchestrator | 2025-07-06 20:10:10.774086 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-07-06 20:10:10.774096 | orchestrator | Sunday 06 July 2025 20:04:45 +0000 (0:00:01.349) 0:00:49.187 *********** 2025-07-06 20:10:10.774113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.774131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.774146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.774156 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.774166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.774177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.774187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.774197 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.774212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.774228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.774239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.774249 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.774258 | orchestrator | 2025-07-06 20:10:10.774295 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-07-06 20:10:10.774306 | orchestrator | Sunday 06 July 2025 20:04:46 +0000 (0:00:00.600) 0:00:49.787 *********** 2025-07-06 20:10:10.774345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.774369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.774379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.774389 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.774567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.774610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.774628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.774638 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.774654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.774664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.774675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.774685 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.774694 | orchestrator | 2025-07-06 20:10:10.774704 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-07-06 20:10:10.774714 | orchestrator | Sunday 06 July 2025 20:04:46 +0000 (0:00:00.587) 0:00:50.374 *********** 2025-07-06 20:10:10.774724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.774748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.774759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.774769 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.774783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.774819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.774831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.774842 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.774852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.774975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.774989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.774999 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.775009 | orchestrator | 2025-07-06 20:10:10.775019 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-07-06 20:10:10.775029 | orchestrator | Sunday 06 July 2025 20:04:48 +0000 (0:00:01.346) 0:00:51.721 *********** 2025-07-06 20:10:10.775039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.775058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.775069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.775079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.775097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.775107 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.775124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.775134 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.775144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.775159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.775169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.775180 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.775189 | orchestrator | 2025-07-06 20:10:10.775199 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-07-06 20:10:10.775209 | orchestrator | Sunday 06 July 2025 20:04:49 +0000 (0:00:00.701) 0:00:52.422 *********** 2025-07-06 20:10:10.775219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.775236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.775252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.775262 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.775272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.775283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.775297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.775307 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.775318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.775334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.775345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.775355 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.775365 | orchestrator | 2025-07-06 20:10:10.775375 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-07-06 20:10:10.775390 | orchestrator | Sunday 06 July 2025 20:04:49 +0000 (0:00:00.780) 0:00:53.203 *********** 2025-07-06 20:10:10.775400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.775410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.775425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.775435 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.775445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.775563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.775616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.775627 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.775650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:10:10.775661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:10:10.775670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:10:10.775678 | orchestrator | skipping: [testbed-node-2] 
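[Editor's note] All of the service-cert-copy certificate/key tasks above were skipped on testbed-node-0/1/2, which is consistent with backend TLS being disabled for these services in this testbed (the service definitions printed later in this log carry 'tls_backend': 'no'). As an illustration only, and not the actual kolla-ansible source, the sketch below shows the general pattern such a role follows: loop over the printed project_services map and guard the copy with a TLS flag. The variable names backend_tls_enabled and certs_src_dir are assumptions.

# tasks/cert-copy-sketch.yml -- illustrative only, not the kolla-ansible role
- name: "{{ project_name }} | Copying over backend internal TLS certificate"
  become: true
  ansible.builtin.copy:
    # certs_src_dir is an assumed variable pointing at the generated certificates
    src: "{{ certs_src_dir }}/{{ item.key }}-cert.pem"
    dest: "/etc/kolla/{{ item.key }}/{{ item.key }}-cert.pem"
    mode: "0600"
  # project_services is the dict of container definitions shown in the items above
  with_dict: "{{ project_services }}"
  # backend_tls_enabled is an assumed flag; it evaluates false in this run,
  # which is why every item is reported as "skipping" per node.
  when: backend_tls_enabled | bool
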
2025-07-06 20:10:10.775686 | orchestrator | 2025-07-06 20:10:10.775700 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-07-06 20:10:10.775709 | orchestrator | Sunday 06 July 2025 20:04:51 +0000 (0:00:01.336) 0:00:54.539 *********** 2025-07-06 20:10:10.775725 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-06 20:10:10.775733 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-06 20:10:10.775741 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-06 20:10:10.775749 | orchestrator | 2025-07-06 20:10:10.775775 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-07-06 20:10:10.775783 | orchestrator | Sunday 06 July 2025 20:04:52 +0000 (0:00:01.588) 0:00:56.128 *********** 2025-07-06 20:10:10.775791 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-06 20:10:10.775799 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-06 20:10:10.775807 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-06 20:10:10.775815 | orchestrator | 2025-07-06 20:10:10.775823 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-07-06 20:10:10.775831 | orchestrator | Sunday 06 July 2025 20:04:54 +0000 (0:00:01.413) 0:00:57.542 *********** 2025-07-06 20:10:10.775839 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-06 20:10:10.775847 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-06 20:10:10.775855 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-06 20:10:10.775863 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-06 20:10:10.775871 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.775879 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-06 20:10:10.775887 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.775894 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-06 20:10:10.775902 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.775910 | orchestrator | 2025-07-06 20:10:10.775918 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-07-06 20:10:10.775926 | orchestrator | Sunday 06 July 2025 20:04:55 +0000 (0:00:01.155) 0:00:58.697 *********** 2025-07-06 20:10:10.775941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-06 20:10:10.775950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-06 20:10:10.775959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:10:10.775976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-06 20:10:10.775985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:10:10.775994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:10:10.776002 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:10:10.776016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:10:10.776025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:10:10.776042 | orchestrator | 2025-07-06 20:10:10.776050 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-07-06 20:10:10.776058 | orchestrator | Sunday 06 July 2025 20:04:58 +0000 (0:00:03.674) 0:01:02.372 *********** 2025-07-06 20:10:10.776066 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.776074 | orchestrator | 2025-07-06 20:10:10.776082 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-07-06 20:10:10.776090 | orchestrator | Sunday 06 July 2025 20:04:59 +0000 (0:00:00.744) 0:01:03.116 *********** 2025-07-06 20:10:10.776102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-06 20:10:10.776112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.776121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.776130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.778218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-06 20:10:10.778336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.778414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.778431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.778443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-06 20:10:10.778455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.778532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.778571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.778592 | orchestrator | 2025-07-06 20:10:10.778613 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-07-06 20:10:10.778631 | orchestrator | Sunday 06 July 2025 20:05:03 +0000 (0:00:03.397) 0:01:06.514 *********** 2025-07-06 20:10:10.778659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-06 20:10:10.778681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.778703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.778723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.778745 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.778770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-06 20:10:10.778792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.778804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.778842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.778855 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.778867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-06 20:10:10.778878 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.778904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.778916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.778928 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.778939 | orchestrator | 2025-07-06 20:10:10.778950 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-07-06 20:10:10.778962 | orchestrator | Sunday 06 July 2025 20:05:03 +0000 (0:00:00.694) 0:01:07.209 *********** 2025-07-06 20:10:10.778973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-07-06 20:10:10.778990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-06 20:10:10.779003 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.779014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-07-06 20:10:10.779025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-06 20:10:10.779037 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.779048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-07-06 20:10:10.779059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-06 20:10:10.779070 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.779081 | orchestrator | 2025-07-06 20:10:10.779092 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-07-06 20:10:10.779104 | orchestrator | Sunday 06 July 2025 20:05:04 +0000 (0:00:01.206) 0:01:08.415 *********** 2025-07-06 20:10:10.779115 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.779125 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.779136 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.779147 | orchestrator | 2025-07-06 20:10:10.779158 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-07-06 20:10:10.779169 | orchestrator | Sunday 06 July 2025 20:05:06 +0000 (0:00:01.384) 0:01:09.800 *********** 2025-07-06 20:10:10.779180 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.779191 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.779209 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.779220 | orchestrator | 2025-07-06 20:10:10.779231 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-07-06 20:10:10.779242 | orchestrator | Sunday 06 July 2025 20:05:08 +0000 (0:00:01.906) 0:01:11.707 *********** 2025-07-06 20:10:10.779253 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.779264 | orchestrator | 2025-07-06 20:10:10.779275 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-07-06 20:10:10.779286 | orchestrator | Sunday 06 July 2025 20:05:08 +0000 (0:00:00.616) 0:01:12.323 *********** 2025-07-06 20:10:10.779308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.779320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}})  2025-07-06 20:10:10.779337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.779350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.779361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.779383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.779402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.779415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.779432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.779444 | orchestrator | 2025-07-06 20:10:10.779456 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-07-06 20:10:10.779538 | orchestrator | Sunday 06 July 2025 20:05:13 +0000 (0:00:04.177) 0:01:16.500 *********** 2025-07-06 20:10:10.779551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.779572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.779591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.779603 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.779615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.779632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.779644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.779662 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.779674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.779691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.779703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.779715 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.779726 | orchestrator | 2025-07-06 20:10:10.779737 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-07-06 20:10:10.779748 | orchestrator | Sunday 06 July 2025 20:05:13 +0000 (0:00:00.580) 0:01:17.081 *********** 2025-07-06 20:10:10.779759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-06 20:10:10.779771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-06 20:10:10.779783 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.779794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-06 20:10:10.779810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-06 20:10:10.779822 | orchestrator | skipping: 
[testbed-node-1] 2025-07-06 20:10:10.779845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-06 20:10:10.779856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-06 20:10:10.779867 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.779878 | orchestrator | 2025-07-06 20:10:10.779889 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-07-06 20:10:10.779900 | orchestrator | Sunday 06 July 2025 20:05:14 +0000 (0:00:00.706) 0:01:17.788 *********** 2025-07-06 20:10:10.779911 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.779922 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.779933 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.779944 | orchestrator | 2025-07-06 20:10:10.779955 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-07-06 20:10:10.779966 | orchestrator | Sunday 06 July 2025 20:05:15 +0000 (0:00:01.603) 0:01:19.392 *********** 2025-07-06 20:10:10.779977 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.779988 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.779999 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.780010 | orchestrator | 2025-07-06 20:10:10.780020 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-07-06 20:10:10.780031 | orchestrator | Sunday 06 July 2025 20:05:18 +0000 (0:00:02.121) 0:01:21.513 *********** 2025-07-06 20:10:10.780042 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.780053 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.780064 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.780075 | orchestrator | 2025-07-06 20:10:10.780086 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-07-06 20:10:10.780097 | orchestrator | Sunday 06 July 2025 20:05:18 +0000 (0:00:00.337) 0:01:21.851 *********** 2025-07-06 20:10:10.780107 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.780118 | orchestrator | 2025-07-06 20:10:10.780129 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-07-06 20:10:10.780140 | orchestrator | Sunday 06 July 2025 20:05:19 +0000 (0:00:00.746) 0:01:22.597 *********** 2025-07-06 20:10:10.780173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 
fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-06 20:10:10.780187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-06 20:10:10.780211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-06 20:10:10.780223 | orchestrator | 2025-07-06 20:10:10.780234 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-07-06 20:10:10.780245 | orchestrator | Sunday 06 July 2025 20:05:22 +0000 (0:00:03.204) 0:01:25.802 *********** 2025-07-06 20:10:10.780257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-06 20:10:10.780268 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.780279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 
'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-06 20:10:10.780306 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.780326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-06 20:10:10.780349 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.780361 | orchestrator | 2025-07-06 20:10:10.780379 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-07-06 20:10:10.780390 | orchestrator | Sunday 06 July 2025 20:05:24 +0000 (0:00:02.152) 0:01:27.954 *********** 2025-07-06 20:10:10.780402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-06 20:10:10.780420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-06 20:10:10.780433 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.780445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-06 20:10:10.780456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 
fall 5']}})  2025-07-06 20:10:10.780485 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.780496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-06 20:10:10.780508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-06 20:10:10.780519 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.780531 | orchestrator | 2025-07-06 20:10:10.780542 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-07-06 20:10:10.780553 | orchestrator | Sunday 06 July 2025 20:05:26 +0000 (0:00:01.784) 0:01:29.739 *********** 2025-07-06 20:10:10.780563 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.780574 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.780585 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.780596 | orchestrator | 2025-07-06 20:10:10.780607 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-07-06 20:10:10.780618 | orchestrator | Sunday 06 July 2025 20:05:27 +0000 (0:00:01.002) 0:01:30.741 *********** 2025-07-06 20:10:10.780629 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.780639 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.780650 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.780690 | orchestrator | 2025-07-06 20:10:10.780702 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-07-06 20:10:10.780727 | orchestrator | Sunday 06 July 2025 20:05:28 +0000 (0:00:01.370) 0:01:32.112 *********** 2025-07-06 20:10:10.780738 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.780749 | orchestrator | 2025-07-06 20:10:10.780760 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-07-06 20:10:10.780771 | orchestrator | Sunday 06 July 2025 20:05:29 +0000 (0:00:00.714) 0:01:32.826 *********** 2025-07-06 20:10:10.780783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.780802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.780814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.780827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.780845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.780886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.780904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.780916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.780928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.780940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.780965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.780978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.780989 | orchestrator | 2025-07-06 20:10:10.781000 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-07-06 20:10:10.781011 | orchestrator | Sunday 06 July 2025 20:05:32 +0000 (0:00:03.330) 0:01:36.157 *********** 2025-07-06 20:10:10.781027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.781039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781088 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.781099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.781116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781157 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.781175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.781187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781226 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.781237 | orchestrator | 2025-07-06 20:10:10.781248 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-07-06 20:10:10.781260 | orchestrator | Sunday 06 July 2025 20:05:33 +0000 (0:00:01.053) 0:01:37.210 *********** 2025-07-06 20:10:10.781282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-06 20:10:10.781293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-06 20:10:10.781304 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.781315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-06 20:10:10.781326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-06 20:10:10.781337 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.781354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-06 20:10:10.781366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-06 20:10:10.781378 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.781388 | orchestrator | 2025-07-06 20:10:10.781399 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users 
config] ************* 2025-07-06 20:10:10.781410 | orchestrator | Sunday 06 July 2025 20:05:34 +0000 (0:00:00.896) 0:01:38.106 *********** 2025-07-06 20:10:10.781421 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.781432 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.781443 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.781454 | orchestrator | 2025-07-06 20:10:10.781483 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-07-06 20:10:10.781495 | orchestrator | Sunday 06 July 2025 20:05:36 +0000 (0:00:01.371) 0:01:39.477 *********** 2025-07-06 20:10:10.781506 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.781517 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.781528 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.781539 | orchestrator | 2025-07-06 20:10:10.781549 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-07-06 20:10:10.781560 | orchestrator | Sunday 06 July 2025 20:05:38 +0000 (0:00:02.009) 0:01:41.487 *********** 2025-07-06 20:10:10.781571 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.781582 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.781592 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.781603 | orchestrator | 2025-07-06 20:10:10.781614 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-07-06 20:10:10.781625 | orchestrator | Sunday 06 July 2025 20:05:38 +0000 (0:00:00.530) 0:01:42.018 *********** 2025-07-06 20:10:10.781636 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.781647 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.781658 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.781669 | orchestrator | 2025-07-06 20:10:10.781680 | orchestrator | TASK [include_role : designate] ************************************************ 2025-07-06 20:10:10.781690 | orchestrator | Sunday 06 July 2025 20:05:38 +0000 (0:00:00.287) 0:01:42.305 *********** 2025-07-06 20:10:10.781706 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.781717 | orchestrator | 2025-07-06 20:10:10.781728 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-07-06 20:10:10.781739 | orchestrator | Sunday 06 July 2025 20:05:39 +0000 (0:00:00.750) 0:01:43.056 *********** 2025-07-06 20:10:10.781757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:10:10.781770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:10:10.781788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:10:10.781800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:10:10.781828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:10:10.781846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:10:10.781898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781914 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.781996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782137 | orchestrator | 2025-07-06 20:10:10.782148 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-07-06 20:10:10.782159 | orchestrator | Sunday 06 July 2025 20:05:44 +0000 (0:00:04.752) 0:01:47.808 *********** 2025-07-06 20:10:10.782185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:10:10.782197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:10:10.782209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782279 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.782301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:10:10.782318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:10:10.782336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:10:10.782349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:10:10.782360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782491 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.782510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.782547 | orchestrator 
| skipping: [testbed-node-1] 2025-07-06 20:10:10.782558 | orchestrator | 2025-07-06 20:10:10.782570 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-07-06 20:10:10.782601 | orchestrator | Sunday 06 July 2025 20:05:45 +0000 (0:00:01.116) 0:01:48.925 *********** 2025-07-06 20:10:10.782613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-06 20:10:10.782625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-06 20:10:10.782636 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.782652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-06 20:10:10.782664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-06 20:10:10.782675 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.782686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-06 20:10:10.782697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-06 20:10:10.782708 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.782719 | orchestrator | 2025-07-06 20:10:10.782730 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-07-06 20:10:10.782741 | orchestrator | Sunday 06 July 2025 20:05:47 +0000 (0:00:01.545) 0:01:50.471 *********** 2025-07-06 20:10:10.782752 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.782764 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.782774 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.782785 | orchestrator | 2025-07-06 20:10:10.782796 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-07-06 20:10:10.782807 | orchestrator | Sunday 06 July 2025 20:05:49 +0000 (0:00:02.201) 0:01:52.672 *********** 2025-07-06 20:10:10.782818 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.782829 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.782840 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.782850 | orchestrator | 2025-07-06 20:10:10.782861 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-07-06 20:10:10.782872 | orchestrator | Sunday 06 July 2025 20:05:51 +0000 (0:00:02.224) 0:01:54.896 *********** 2025-07-06 20:10:10.782883 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.782894 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.782905 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.782916 | orchestrator | 2025-07-06 20:10:10.782927 | orchestrator | TASK [include_role : glance] 
*************************************************** 2025-07-06 20:10:10.782938 | orchestrator | Sunday 06 July 2025 20:05:51 +0000 (0:00:00.379) 0:01:55.276 *********** 2025-07-06 20:10:10.782949 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.782967 | orchestrator | 2025-07-06 20:10:10.782978 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-07-06 20:10:10.782989 | orchestrator | Sunday 06 July 2025 20:05:52 +0000 (0:00:00.934) 0:01:56.210 *********** 2025-07-06 20:10:10.783011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:10:10.783046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-06 20:10:10.783067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:10:10.783094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 
ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-06 20:10:10.783114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:10:10.783139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-06 20:10:10.783151 | orchestrator | 2025-07-06 20:10:10.783163 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-07-06 20:10:10.783174 | orchestrator | Sunday 06 July 2025 20:05:59 +0000 (0:00:06.466) 0:02:02.677 *********** 2025-07-06 20:10:10.783192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 20:10:10.783215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-06 20:10:10.783228 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.783240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 20:10:10.783267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-06 20:10:10.783285 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.783296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 
20:10:10.783324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-06 20:10:10.783337 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.783348 | orchestrator | 2025-07-06 20:10:10.783359 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-07-06 20:10:10.783370 | orchestrator | Sunday 06 July 2025 20:06:03 +0000 (0:00:04.368) 0:02:07.045 *********** 2025-07-06 20:10:10.783386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-06 20:10:10.783399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-06 20:10:10.783411 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.783422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-06 20:10:10.783440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-06 20:10:10.783452 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.783480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-06 20:10:10.783500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-06 20:10:10.783512 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.783523 | orchestrator | 2025-07-06 20:10:10.783534 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-07-06 20:10:10.783545 | orchestrator | Sunday 06 July 2025 20:06:08 +0000 (0:00:04.871) 0:02:11.916 *********** 2025-07-06 20:10:10.783556 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.783567 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.783578 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.783589 | orchestrator | 2025-07-06 20:10:10.783600 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-07-06 20:10:10.783611 | orchestrator | Sunday 06 July 2025 20:06:10 +0000 (0:00:01.531) 0:02:13.448 *********** 2025-07-06 20:10:10.783622 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.783633 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.783644 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.783655 | orchestrator | 2025-07-06 20:10:10.783666 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-07-06 20:10:10.783677 | orchestrator | Sunday 06 July 2025 20:06:11 +0000 (0:00:01.824) 0:02:15.273 *********** 2025-07-06 
20:10:10.783687 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.783698 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.783709 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.783720 | orchestrator | 2025-07-06 20:10:10.783731 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-07-06 20:10:10.783742 | orchestrator | Sunday 06 July 2025 20:06:12 +0000 (0:00:00.308) 0:02:15.581 *********** 2025-07-06 20:10:10.783752 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.783763 | orchestrator | 2025-07-06 20:10:10.783782 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-07-06 20:10:10.783793 | orchestrator | Sunday 06 July 2025 20:06:13 +0000 (0:00:00.888) 0:02:16.469 *********** 2025-07-06 20:10:10.783805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:10:10.783823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:10:10.783835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:10:10.783847 | orchestrator | 2025-07-06 20:10:10.783857 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-07-06 20:10:10.783874 | orchestrator | Sunday 06 July 2025 20:06:17 +0000 (0:00:04.127) 0:02:20.596 *********** 2025-07-06 20:10:10.783902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-06 20:10:10.783923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-06 20:10:10.783941 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.783959 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.783984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-06 20:10:10.784010 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.784027 | orchestrator | 2025-07-06 20:10:10.784044 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-07-06 20:10:10.784061 | orchestrator | Sunday 06 July 2025 20:06:17 +0000 (0:00:00.323) 0:02:20.919 *********** 2025-07-06 20:10:10.784078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-06 20:10:10.784096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-06 20:10:10.784115 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.784134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-06 20:10:10.784153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-06 20:10:10.784171 | orchestrator | skipping: 
[testbed-node-1] 2025-07-06 20:10:10.784189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-06 20:10:10.784204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-06 20:10:10.784224 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.784243 | orchestrator | 2025-07-06 20:10:10.784262 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-07-06 20:10:10.784278 | orchestrator | Sunday 06 July 2025 20:06:18 +0000 (0:00:00.537) 0:02:21.456 *********** 2025-07-06 20:10:10.784296 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.784314 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.784332 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.784350 | orchestrator | 2025-07-06 20:10:10.784367 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-07-06 20:10:10.784386 | orchestrator | Sunday 06 July 2025 20:06:19 +0000 (0:00:01.337) 0:02:22.794 *********** 2025-07-06 20:10:10.784404 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.784423 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.784442 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.784460 | orchestrator | 2025-07-06 20:10:10.784531 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-07-06 20:10:10.784549 | orchestrator | Sunday 06 July 2025 20:06:21 +0000 (0:00:01.734) 0:02:24.529 *********** 2025-07-06 20:10:10.784568 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.784587 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.784605 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.784623 | orchestrator | 2025-07-06 20:10:10.784641 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-07-06 20:10:10.784660 | orchestrator | Sunday 06 July 2025 20:06:21 +0000 (0:00:00.256) 0:02:24.785 *********** 2025-07-06 20:10:10.784678 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.784709 | orchestrator | 2025-07-06 20:10:10.784728 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-07-06 20:10:10.784746 | orchestrator | Sunday 06 July 2025 20:06:22 +0000 (0:00:00.813) 0:02:25.598 *********** 2025-07-06 20:10:10.784776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-06 20:10:10.784810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-06 20:10:10.784849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-06 20:10:10.784869 | orchestrator | 2025-07-06 20:10:10.784887 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-07-06 20:10:10.784905 | orchestrator | Sunday 06 July 2025 20:06:25 +0000 (0:00:03.022) 0:02:28.621 *********** 2025-07-06 20:10:10.784937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-06 20:10:10.784967 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.784993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-06 20:10:10.785013 | orchestrator 
| skipping: [testbed-node-1] 2025-07-06 20:10:10.785050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-06 20:10:10.785081 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.785100 | orchestrator | 2025-07-06 20:10:10.785120 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-07-06 20:10:10.785139 | orchestrator | Sunday 06 July 2025 20:06:25 +0000 (0:00:00.785) 0:02:29.406 *********** 2025-07-06 20:10:10.785158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-06 20:10:10.785180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-06 20:10:10.785200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-06 20:10:10.785219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-06 20:10:10.785238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-07-06 20:10:10.785257 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.785276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-06 20:10:10.785295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-06 20:10:10.785607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-06 20:10:10.785636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-06 20:10:10.785656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-07-06 20:10:10.785674 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.785693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-06 20:10:10.785712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-06 20:10:10.785740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-06 20:10:10.785759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-06 20:10:10.785778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-07-06 20:10:10.785796 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.785814 | orchestrator | 2025-07-06 20:10:10.785833 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-07-06 20:10:10.785852 | orchestrator | Sunday 06 July 2025 20:06:27 +0000 (0:00:01.409) 0:02:30.816 *********** 2025-07-06 20:10:10.785870 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.785888 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.785905 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.785922 | orchestrator | 2025-07-06 20:10:10.785940 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-07-06 20:10:10.785958 | orchestrator | Sunday 06 July 2025 20:06:28 +0000 (0:00:01.471) 0:02:32.288 *********** 2025-07-06 20:10:10.785976 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.785995 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.786013 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.786120 | orchestrator | 2025-07-06 20:10:10.786140 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-07-06 20:10:10.786160 | orchestrator | Sunday 06 July 2025 20:06:30 +0000 (0:00:01.985) 0:02:34.274 *********** 2025-07-06 20:10:10.786194 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.786213 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.786233 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.786252 | orchestrator | 2025-07-06 20:10:10.786271 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-07-06 20:10:10.786290 | orchestrator | Sunday 06 July 2025 20:06:31 +0000 (0:00:00.301) 0:02:34.575 *********** 2025-07-06 20:10:10.786309 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.786329 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.786348 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.786368 | orchestrator | 2025-07-06 20:10:10.786388 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-07-06 20:10:10.786408 | orchestrator | Sunday 06 July 2025 20:06:31 +0000 (0:00:00.261) 0:02:34.836 *********** 2025-07-06 20:10:10.786429 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.786448 | orchestrator | 2025-07-06 20:10:10.786492 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-07-06 20:10:10.786511 | orchestrator | Sunday 06 July 2025 20:06:32 +0000 (0:00:00.974) 0:02:35.811 *********** 2025-07-06 20:10:10.786546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:10:10.786569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:10:10.786606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-06 20:10:10.786628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:10:10.786661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:10:10.786689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-06 20:10:10.786710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:10:10.786737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:10:10.786757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-06 20:10:10.786786 | orchestrator | 
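(Editorial note for orientation: the keystone item processed by the haproxy-config task above carries the haproxy mapping re-rendered below as YAML. The values are copied from the item dump in the log; the top-level wrapper key and the inline comments are illustrative annotations, not part of the job output.)

  keystone:                          # illustrative wrapper key, not from the log
    haproxy:
      keystone_internal:             # listener on the internal VIP
        enabled: true
        mode: http
        external: false
        tls_backend: "no"            # backends are plain HTTP
        port: "5000"
        listen_port: "5000"
        backend_http_extra:
          - balance roundrobin
      keystone_external:             # public endpoint behind api.testbed.osism.xyz
        enabled: true
        mode: http
        external: true
        external_fqdn: api.testbed.osism.xyz
        tls_backend: "no"
        port: "5000"
        listen_port: "5000"
        backend_http_extra:
          - balance roundrobin

(The keystone-ssh and keystone-fernet items were skipped by this task, presumably because their definitions contain no haproxy section; only containers that expose an API through the load balancer have a frontend rendered here.)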
2025-07-06 20:10:10.786804 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-07-06 20:10:10.786823 | orchestrator | Sunday 06 July 2025 20:06:35 +0000 (0:00:03.495) 0:02:39.307 *********** 2025-07-06 20:10:10.786844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-06 20:10:10.786873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:10:10.786894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-06 20:10:10.786912 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.786938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-06 20:10:10.786959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:10:10.786988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-06 20:10:10.787017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-06 20:10:10.787037 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.787056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:10:10.787076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-06 20:10:10.787095 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.787114 | orchestrator | 2025-07-06 20:10:10.787140 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-07-06 20:10:10.787159 | orchestrator | Sunday 06 July 2025 20:06:36 +0000 (0:00:00.498) 0:02:39.806 *********** 2025-07-06 20:10:10.787179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-06 20:10:10.787213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-06 20:10:10.787232 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.787251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-06 20:10:10.787271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-06 20:10:10.787291 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.787310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-06 20:10:10.787329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-06 20:10:10.787348 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.787367 | orchestrator | 2025-07-06 20:10:10.787386 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-07-06 20:10:10.787405 | orchestrator | Sunday 06 July 2025 20:06:37 +0000 (0:00:00.842) 0:02:40.648 *********** 2025-07-06 20:10:10.787423 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.787442 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.787460 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.787502 | orchestrator | 2025-07-06 20:10:10.787521 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-07-06 20:10:10.787540 | orchestrator | Sunday 
06 July 2025 20:06:38 +0000 (0:00:01.197) 0:02:41.846 *********** 2025-07-06 20:10:10.787558 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.787578 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.787597 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.787615 | orchestrator | 2025-07-06 20:10:10.787634 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-07-06 20:10:10.787661 | orchestrator | Sunday 06 July 2025 20:06:40 +0000 (0:00:01.778) 0:02:43.625 *********** 2025-07-06 20:10:10.787681 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.787699 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.787717 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.787734 | orchestrator | 2025-07-06 20:10:10.787752 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-07-06 20:10:10.787771 | orchestrator | Sunday 06 July 2025 20:06:40 +0000 (0:00:00.250) 0:02:43.875 *********** 2025-07-06 20:10:10.787790 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.787809 | orchestrator | 2025-07-06 20:10:10.787828 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-07-06 20:10:10.787846 | orchestrator | Sunday 06 July 2025 20:06:41 +0000 (0:00:00.990) 0:02:44.866 *********** 2025-07-06 20:10:10.787866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:10:10.787903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.787924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:10:10.787943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.787973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:10:10.788002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.788021 | orchestrator | 2025-07-06 20:10:10.788040 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-07-06 20:10:10.788065 | orchestrator | Sunday 06 July 2025 20:06:44 +0000 (0:00:02.962) 0:02:47.828 *********** 2025-07-06 20:10:10.788087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:10:10.788107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.788127 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.788154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:10:10.788174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.788202 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.788227 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:10:10.788247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.788265 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.788283 | orchestrator | 2025-07-06 20:10:10.788301 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-07-06 20:10:10.788320 | orchestrator | Sunday 06 July 2025 20:06:44 +0000 (0:00:00.548) 0:02:48.376 *********** 2025-07-06 20:10:10.788339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-06 20:10:10.788359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-06 20:10:10.788377 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.788395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-06 20:10:10.788414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-06 20:10:10.788434 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.788452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-06 20:10:10.788539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}})  2025-07-06 20:10:10.788579 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.788598 | orchestrator | 2025-07-06 20:10:10.788617 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-07-06 20:10:10.788635 | orchestrator | Sunday 06 July 2025 20:06:46 +0000 (0:00:01.155) 0:02:49.532 *********** 2025-07-06 20:10:10.788652 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.788671 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.788690 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.788708 | orchestrator | 2025-07-06 20:10:10.788727 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-07-06 20:10:10.788745 | orchestrator | Sunday 06 July 2025 20:06:47 +0000 (0:00:01.273) 0:02:50.806 *********** 2025-07-06 20:10:10.788763 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.788782 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.788799 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.788818 | orchestrator | 2025-07-06 20:10:10.788837 | orchestrator | TASK [include_role : manila] *************************************************** 2025-07-06 20:10:10.788855 | orchestrator | Sunday 06 July 2025 20:06:49 +0000 (0:00:01.923) 0:02:52.729 *********** 2025-07-06 20:10:10.788873 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.788891 | orchestrator | 2025-07-06 20:10:10.788909 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-07-06 20:10:10.788927 | orchestrator | Sunday 06 July 2025 20:06:50 +0000 (0:00:00.999) 0:02:53.729 *********** 2025-07-06 20:10:10.788959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-06 20:10:10.788979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': 
'8786'}}}}) 2025-07-06 20:10:10.788998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-06 20:10:10.789155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789214 | orchestrator | 2025-07-06 20:10:10.789229 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 
2025-07-06 20:10:10.789246 | orchestrator | Sunday 06 July 2025 20:06:53 +0000 (0:00:03.379) 0:02:57.109 *********** 2025-07-06 20:10:10.789267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-06 20:10:10.789285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789343 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.789367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-06 20:10:10.789384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789435 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.789452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-06 20:10:10.789505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 
'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.789582 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.789598 | orchestrator | 2025-07-06 20:10:10.789615 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-07-06 20:10:10.789631 | orchestrator | Sunday 06 July 2025 20:06:54 +0000 (0:00:00.565) 0:02:57.674 *********** 2025-07-06 20:10:10.789652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-06 20:10:10.789669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-06 20:10:10.789686 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.789702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-06 20:10:10.789719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-06 20:10:10.789745 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.789761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-06 20:10:10.789777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-06 20:10:10.789794 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.789810 | orchestrator | 2025-07-06 20:10:10.789826 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-07-06 20:10:10.789843 | orchestrator | Sunday 06 July 2025 20:06:55 +0000 (0:00:00.780) 0:02:58.455 *********** 2025-07-06 20:10:10.789859 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.789875 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.789891 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.789907 | orchestrator | 2025-07-06 20:10:10.789923 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-07-06 20:10:10.789939 | orchestrator | Sunday 06 July 2025 20:06:56 +0000 (0:00:01.426) 0:02:59.881 *********** 2025-07-06 20:10:10.789955 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.789971 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.789987 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.790004 | orchestrator | 2025-07-06 20:10:10.790098 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-07-06 20:10:10.790120 | orchestrator | Sunday 06 July 2025 20:06:58 +0000 (0:00:01.848) 0:03:01.730 *********** 2025-07-06 20:10:10.790136 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.790153 | orchestrator | 2025-07-06 20:10:10.790169 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-07-06 20:10:10.790186 | orchestrator | Sunday 06 July 2025 20:06:59 +0000 (0:00:00.963) 0:03:02.694 *********** 2025-07-06 20:10:10.790203 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-06 20:10:10.790219 | orchestrator | 2025-07-06 20:10:10.790235 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-07-06 20:10:10.790252 | orchestrator | Sunday 06 July 2025 20:07:01 +0000 (0:00:02.719) 0:03:05.413 *********** 2025-07-06 20:10:10.790303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:10:10.790333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-06 20:10:10.790350 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.790391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:10:10.790413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-06 20:10:10.790430 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.790454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:10:10.790507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-06 20:10:10.790525 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.790540 | orchestrator | 2025-07-06 20:10:10.790556 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-07-06 20:10:10.790572 | orchestrator | Sunday 06 July 2025 20:07:04 +0000 (0:00:02.160) 0:03:07.574 *********** 2025-07-06 20:10:10.790614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:10:10.790655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-06 20:10:10.790675 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.790735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:10:10.790791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-06 20:10:10.790809 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.790833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:10:10.790861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-06 20:10:10.790878 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.790894 | orchestrator | 2025-07-06 20:10:10.790910 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-07-06 20:10:10.790926 | orchestrator | Sunday 06 July 2025 20:07:06 +0000 (0:00:01.975) 0:03:09.549 *********** 2025-07-06 20:10:10.790943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-06 20:10:10.790999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-06 20:10:10.791018 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.791035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-06 20:10:10.791052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-06 20:10:10.791081 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.791103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-06 20:10:10.791120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-06 20:10:10.791151 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.791168 | orchestrator | 2025-07-06 20:10:10.791184 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-07-06 20:10:10.791201 | orchestrator | Sunday 06 July 2025 20:07:08 +0000 (0:00:02.574) 0:03:12.123 *********** 2025-07-06 20:10:10.791217 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.791233 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.791249 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.791265 | orchestrator | 2025-07-06 20:10:10.791281 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-07-06 20:10:10.791298 | orchestrator | Sunday 06 July 2025 20:07:10 +0000 (0:00:02.252) 0:03:14.376 *********** 2025-07-06 20:10:10.791314 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.791330 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.791345 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.791376 | orchestrator | 2025-07-06 20:10:10.791392 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-07-06 20:10:10.791409 | orchestrator | Sunday 06 July 2025 20:07:12 +0000 (0:00:01.453) 0:03:15.829 *********** 2025-07-06 20:10:10.791425 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.791441 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.791457 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.791497 | orchestrator | 2025-07-06 20:10:10.791514 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-07-06 20:10:10.791531 | orchestrator | Sunday 06 July 2025 20:07:12 +0000 (0:00:00.346) 0:03:16.175 *********** 2025-07-06 20:10:10.791547 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.791563 | orchestrator | 2025-07-06 20:10:10.791579 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-07-06 20:10:10.791595 | orchestrator | Sunday 06 July 2025 20:07:13 +0000 (0:00:01.058) 0:03:17.234 *********** 2025-07-06 20:10:10.791638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-06 20:10:10.791668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-06 20:10:10.791692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-06 20:10:10.791709 | orchestrator | 2025-07-06 20:10:10.791725 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-07-06 20:10:10.791741 | orchestrator | Sunday 06 July 2025 20:07:15 +0000 (0:00:01.799) 0:03:19.033 *********** 2025-07-06 20:10:10.791757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-06 20:10:10.791775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 
'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-06 20:10:10.791792 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.791809 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.791856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-06 20:10:10.791874 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.791889 | orchestrator | 2025-07-06 20:10:10.791905 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-07-06 20:10:10.791921 | orchestrator | Sunday 06 July 2025 20:07:15 +0000 (0:00:00.381) 0:03:19.415 *********** 2025-07-06 20:10:10.791938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-06 20:10:10.791953 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.791966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-06 20:10:10.791979 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.791993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-06 20:10:10.792011 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.792025 | orchestrator | 2025-07-06 20:10:10.792038 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-07-06 20:10:10.792051 | orchestrator | Sunday 06 July 2025 20:07:16 +0000 (0:00:00.599) 0:03:20.015 *********** 2025-07-06 20:10:10.792063 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.792076 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.792089 | orchestrator | skipping: 
[testbed-node-2] 2025-07-06 20:10:10.792101 | orchestrator | 2025-07-06 20:10:10.792114 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-07-06 20:10:10.792126 | orchestrator | Sunday 06 July 2025 20:07:17 +0000 (0:00:00.713) 0:03:20.728 *********** 2025-07-06 20:10:10.792139 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.792151 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.792164 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.792177 | orchestrator | 2025-07-06 20:10:10.792190 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-07-06 20:10:10.792203 | orchestrator | Sunday 06 July 2025 20:07:18 +0000 (0:00:01.212) 0:03:21.941 *********** 2025-07-06 20:10:10.792216 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.792229 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.792242 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.792254 | orchestrator | 2025-07-06 20:10:10.792267 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-07-06 20:10:10.792301 | orchestrator | Sunday 06 July 2025 20:07:18 +0000 (0:00:00.325) 0:03:22.267 *********** 2025-07-06 20:10:10.792315 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.792328 | orchestrator | 2025-07-06 20:10:10.792341 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-07-06 20:10:10.792363 | orchestrator | Sunday 06 July 2025 20:07:20 +0000 (0:00:01.396) 0:03:23.663 *********** 2025-07-06 20:10:10.792378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:10:10.792410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.792426 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.792446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.792461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-06 20:10:10.792536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.792551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.792585 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.792600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.792619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:10:10.792633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.792655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-06 20:10:10.792669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.792701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.792717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-06 20:10:10.792736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:10:10.792751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.792777 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:10:10.792810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.792826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:10:10.792845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.792860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 
'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.792882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.792896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-06 20:10:10.792945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.792961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.793294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.793328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-06 20:10:10.793344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.793373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.793410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 
'timeout': '30'}}})  2025-07-06 20:10:10.793426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.793441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.793461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.793502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:10:10.793516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.793549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.793564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:10:10.793578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-06 20:10:10.793599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.793622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.793637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.793651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-06 20:10:10.793684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.793700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-06 20:10:10.793720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.793743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 
'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:10:10.793758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-06 20:10:10.793791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.793807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:10:10.793827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 
'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.793865 | orchestrator | 2025-07-06 20:10:10.793879 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-07-06 20:10:10.793893 | orchestrator | Sunday 06 July 2025 20:07:24 +0000 (0:00:04.411) 0:03:28.074 *********** 2025-07-06 20:10:10.793906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:10:10.793920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.793952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.793967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-06 20:10:10.794074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:10:10.794119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.794151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.794209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': 
False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-06 20:10:10.794286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:10:10.794299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.794370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-06 20:10:10.794385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.794399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.794433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:10:10.794528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-06 20:10:10.794543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:10:10.794600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:10:10.794618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-06 20:10:10.794632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.794691 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.794722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-06 20:10:10.794793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-06 20:10:10.794823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:10:10.794861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.794919 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.794934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.794948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.794962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:10:10.795000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.795012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-06 20:10:10.795028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:10:10.795039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-06 
20:10:10.795051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-06 20:10:10.795074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:10:10.795107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.795120 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.795130 | orchestrator | 2025-07-06 20:10:10.795142 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-07-06 20:10:10.795153 | orchestrator | Sunday 06 July 2025 20:07:26 +0000 (0:00:01.725) 0:03:29.799 *********** 2025-07-06 20:10:10.795174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-06 20:10:10.795187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-06 20:10:10.795198 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.795208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}})  2025-07-06 20:10:10.795227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-06 20:10:10.795238 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.795249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-06 20:10:10.795260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-06 20:10:10.795271 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.795282 | orchestrator | 2025-07-06 20:10:10.795293 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-07-06 20:10:10.795304 | orchestrator | Sunday 06 July 2025 20:07:28 +0000 (0:00:02.556) 0:03:32.356 *********** 2025-07-06 20:10:10.795314 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.795325 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.795336 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.795346 | orchestrator | 2025-07-06 20:10:10.795357 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-07-06 20:10:10.795368 | orchestrator | Sunday 06 July 2025 20:07:30 +0000 (0:00:01.321) 0:03:33.678 *********** 2025-07-06 20:10:10.795379 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.795390 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.795401 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.795411 | orchestrator | 2025-07-06 20:10:10.795422 | orchestrator | TASK [include_role : placement] ************************************************ 2025-07-06 20:10:10.795432 | orchestrator | Sunday 06 July 2025 20:07:32 +0000 (0:00:02.049) 0:03:35.727 *********** 2025-07-06 20:10:10.795443 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.795460 | orchestrator | 2025-07-06 20:10:10.795523 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-07-06 20:10:10.795534 | orchestrator | Sunday 06 July 2025 20:07:33 +0000 (0:00:01.216) 0:03:36.943 *********** 2025-07-06 20:10:10.795546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.795576 | 
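
Every API service item dumped here also carries a 'haproxy' sub-dict with one internal and one external entry; the external entry is pinned to the external FQDN api.testbed.osism.xyz. A sketch of the placement-api sub-dict from the item above, rewritten as YAML, with comments on how the two entries presumably differ (internal VIP listener vs. external listener):

    # haproxy sub-dict of placement-api as dumped above; the haproxy-config
    # role presumably renders one listener per entry.
    haproxy:
      placement_api:                    # internal listener on the internal VIP
        enabled: true
        mode: http
        external: false
        port: "8780"
        listen_port: "8780"
        tls_backend: "no"
      placement_api_external:           # external listener behind api.testbed.osism.xyz
        enabled: true
        mode: http
        external: true
        external_fqdn: api.testbed.osism.xyz
        port: "8780"
        listen_port: "8780"
        tls_backend: "no"
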
orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.795588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.795599 | orchestrator | 2025-07-06 20:10:10.795610 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-07-06 20:10:10.795621 | orchestrator | Sunday 06 July 2025 20:07:37 +0000 (0:00:04.072) 0:03:41.016 *********** 2025-07-06 20:10:10.795633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.795651 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.795681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.795692 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.795720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.795732 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.795744 | orchestrator | 2025-07-06 20:10:10.795756 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-07-06 20:10:10.795767 | orchestrator | Sunday 06 July 2025 20:07:38 +0000 (0:00:00.684) 0:03:41.700 *********** 2025-07-06 20:10:10.795778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-06 20:10:10.795790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-06 20:10:10.795802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-06 20:10:10.795814 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.795830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-06 20:10:10.795842 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.795854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-06 20:10:10.795866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-06 20:10:10.795885 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.795896 | orchestrator | 2025-07-06 20:10:10.795907 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-07-06 20:10:10.795919 | orchestrator | Sunday 06 July 2025 20:07:39 +0000 (0:00:00.778) 0:03:42.478 *********** 2025-07-06 20:10:10.795930 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.795940 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.795951 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.795962 | orchestrator | 2025-07-06 20:10:10.795974 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-07-06 20:10:10.795985 | orchestrator | Sunday 06 July 2025 20:07:40 +0000 (0:00:01.668) 0:03:44.147 *********** 2025-07-06 20:10:10.795996 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.796008 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.796019 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.796030 | orchestrator | 2025-07-06 20:10:10.796041 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-07-06 20:10:10.796053 | orchestrator | Sunday 06 July 2025 20:07:43 +0000 (0:00:02.388) 0:03:46.535 *********** 2025-07-06 20:10:10.796065 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.796076 | orchestrator | 2025-07-06 20:10:10.796087 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-07-06 20:10:10.796099 | orchestrator | Sunday 06 July 2025 20:07:44 +0000 (0:00:01.311) 0:03:47.847 *********** 2025-07-06 20:10:10.796127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.796221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.796240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.796259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.796272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.796298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.796311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.796327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.796345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.796356 | orchestrator | 2025-07-06 20:10:10.796367 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-07-06 20:10:10.796377 | orchestrator | Sunday 06 July 2025 20:07:48 +0000 (0:00:04.469) 0:03:52.316 *********** 2025-07-06 20:10:10.796389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.796418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.796431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.796443 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.796483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.796497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.796510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.796521 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.796549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.796563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.796586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.796598 | 
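
Each container definition above also carries a 'healthcheck' block whose test invokes helper scripts (healthcheck_curl, healthcheck_port) that kolla images appear to ship. A sketch of the nova-api healthcheck for testbed-node-0, copied from the item dump above with the timing fields annotated (units are presumably seconds):

    # Healthcheck for nova_api on testbed-node-0, as dumped above.
    healthcheck:
      interval: "30"        # presumably seconds between checks
      retries: "3"          # consecutive failures before the container is marked unhealthy
      start_period: "5"     # grace period after container start
      timeout: "30"         # per-check timeout
      test:
        - CMD-SHELL
        - "healthcheck_curl http://192.168.16.10:8774 "   # trailing space as in the dump
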
orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.796609 | orchestrator | 2025-07-06 20:10:10.796621 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-07-06 20:10:10.796633 | orchestrator | Sunday 06 July 2025 20:07:49 +0000 (0:00:00.944) 0:03:53.260 *********** 2025-07-06 20:10:10.796644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-06 20:10:10.796656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-06 20:10:10.796669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-06 20:10:10.796681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-06 20:10:10.796692 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.796704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-06 20:10:10.796715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-06 20:10:10.796727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-06 20:10:10.796738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-06 20:10:10.796750 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.796776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-06 20:10:10.796788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-06 20:10:10.796799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-06 20:10:10.796817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-06 20:10:10.796829 | orchestrator | skipping: 
[testbed-node-2] 2025-07-06 20:10:10.796840 | orchestrator | 2025-07-06 20:10:10.796851 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-07-06 20:10:10.796863 | orchestrator | Sunday 06 July 2025 20:07:50 +0000 (0:00:00.907) 0:03:54.168 *********** 2025-07-06 20:10:10.796874 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.796886 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.796897 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.796907 | orchestrator | 2025-07-06 20:10:10.796919 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-07-06 20:10:10.796930 | orchestrator | Sunday 06 July 2025 20:07:52 +0000 (0:00:01.771) 0:03:55.939 *********** 2025-07-06 20:10:10.796941 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.796953 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.796964 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.796975 | orchestrator | 2025-07-06 20:10:10.796985 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-07-06 20:10:10.796997 | orchestrator | Sunday 06 July 2025 20:07:54 +0000 (0:00:02.201) 0:03:58.141 *********** 2025-07-06 20:10:10.797017 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.797028 | orchestrator | 2025-07-06 20:10:10.797039 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-07-06 20:10:10.797050 | orchestrator | Sunday 06 July 2025 20:07:56 +0000 (0:00:01.564) 0:03:59.706 *********** 2025-07-06 20:10:10.797062 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-07-06 20:10:10.797074 | orchestrator | 2025-07-06 20:10:10.797085 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-07-06 20:10:10.797096 | orchestrator | Sunday 06 July 2025 20:07:57 +0000 (0:00:01.165) 0:04:00.871 *********** 2025-07-06 20:10:10.797108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-06 20:10:10.797121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-06 20:10:10.797133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-06 20:10:10.797145 | orchestrator | 2025-07-06 20:10:10.797162 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-07-06 20:10:10.797239 | orchestrator | Sunday 06 July 2025 20:08:01 +0000 (0:00:04.501) 0:04:05.372 *********** 2025-07-06 20:10:10.797269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:10:10.797282 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.797293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:10:10.797305 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.797315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:10:10.797328 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.797339 | orchestrator | 2025-07-06 20:10:10.797351 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-07-06 20:10:10.797363 | orchestrator | Sunday 06 July 2025 20:08:03 +0000 (0:00:01.522) 0:04:06.895 *********** 2025-07-06 20:10:10.797379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-06 20:10:10.797391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-06 20:10:10.797404 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.797415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-06 20:10:10.797426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-06 20:10:10.797438 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.797449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-06 20:10:10.797461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-06 20:10:10.797496 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.797508 | orchestrator | 2025-07-06 20:10:10.797520 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-06 20:10:10.797531 | orchestrator | Sunday 06 July 2025 20:08:05 +0000 (0:00:02.024) 0:04:08.919 *********** 2025-07-06 20:10:10.797542 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.797552 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.797563 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.797573 | orchestrator | 2025-07-06 20:10:10.797584 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-06 20:10:10.797594 | orchestrator | Sunday 06 July 2025 20:08:07 +0000 (0:00:02.374) 0:04:11.294 *********** 2025-07-06 20:10:10.797605 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.797616 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.797626 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.797637 | orchestrator | 2025-07-06 20:10:10.797648 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-07-06 20:10:10.797658 | orchestrator | Sunday 06 July 2025 20:08:10 +0000 (0:00:03.075) 0:04:14.370 *********** 2025-07-06 20:10:10.797670 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-07-06 20:10:10.797680 | orchestrator | 2025-07-06 20:10:10.797691 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-07-06 20:10:10.797720 | orchestrator | Sunday 06 July 2025 20:08:11 +0000 (0:00:00.787) 0:04:15.157 *********** 2025-07-06 20:10:10.797732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:10:10.797744 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.797756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:10:10.797772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:10:10.797784 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.797795 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.797806 | orchestrator | 2025-07-06 20:10:10.797817 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-07-06 20:10:10.797829 | orchestrator | Sunday 06 July 2025 20:08:12 +0000 (0:00:01.259) 0:04:16.416 *********** 2025-07-06 20:10:10.797841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:10:10.797859 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.797870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:10:10.797882 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.797893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:10:10.797906 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.797916 | orchestrator | 2025-07-06 20:10:10.797926 | orchestrator | TASK [haproxy-config : Configuring firewall for 
nova-cell:nova-spicehtml5proxy] *** 2025-07-06 20:10:10.797937 | orchestrator | Sunday 06 July 2025 20:08:14 +0000 (0:00:01.641) 0:04:18.058 *********** 2025-07-06 20:10:10.797948 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.797958 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.797969 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.797979 | orchestrator | 2025-07-06 20:10:10.797990 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-06 20:10:10.798013 | orchestrator | Sunday 06 July 2025 20:08:15 +0000 (0:00:01.235) 0:04:19.293 *********** 2025-07-06 20:10:10.798058 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:10:10.798068 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:10:10.798078 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:10:10.798088 | orchestrator | 2025-07-06 20:10:10.798098 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-06 20:10:10.798108 | orchestrator | Sunday 06 July 2025 20:08:18 +0000 (0:00:02.357) 0:04:21.650 *********** 2025-07-06 20:10:10.798119 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:10:10.798129 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:10:10.798139 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:10:10.798148 | orchestrator | 2025-07-06 20:10:10.798158 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-07-06 20:10:10.798168 | orchestrator | Sunday 06 July 2025 20:08:21 +0000 (0:00:03.083) 0:04:24.734 *********** 2025-07-06 20:10:10.798178 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-07-06 20:10:10.798188 | orchestrator | 2025-07-06 20:10:10.798197 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-07-06 20:10:10.798207 | orchestrator | Sunday 06 July 2025 20:08:22 +0000 (0:00:01.031) 0:04:25.765 *********** 2025-07-06 20:10:10.798217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-06 20:10:10.798234 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.798248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-06 20:10:10.798259 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.798269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-06 20:10:10.798279 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.798289 | orchestrator | 2025-07-06 20:10:10.798298 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-07-06 20:10:10.798308 | orchestrator | Sunday 06 July 2025 20:08:23 +0000 (0:00:00.983) 0:04:26.748 *********** 2025-07-06 20:10:10.798318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-06 20:10:10.798329 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.798339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-06 20:10:10.798349 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.798374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-06 20:10:10.798386 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.798396 | orchestrator | 2025-07-06 20:10:10.798407 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-07-06 20:10:10.798417 | orchestrator | Sunday 06 July 2025 20:08:24 +0000 (0:00:01.234) 0:04:27.983 *********** 2025-07-06 20:10:10.798428 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.798439 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.798449 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.798460 | orchestrator | 2025-07-06 20:10:10.798490 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-06 20:10:10.798501 | orchestrator | Sunday 06 July 2025 20:08:26 +0000 (0:00:01.741) 0:04:29.725 *********** 2025-07-06 20:10:10.798511 | orchestrator | ok: [testbed-node-0] 2025-07-06 
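
The three "Configure loadbalancer for ..." passes above (nova-novncproxy, nova-spicehtml5proxy, nova-serialproxy) all reuse /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml; the "(item=...)" suffix in the log suggests each include is driven by a loop over the proxy name. A minimal sketch of that pattern, with the task layout and loop form assumed rather than taken from the role:

    # Sketch only: the file path and item values come from the log; everything
    # else (loop form, task grouping) is an assumption for illustration.
    - name: Configure loadbalancer for nova-novncproxy
      include_tasks: cell_proxy_loadbalancer.yml
      loop:
        - nova-novncproxy

    - name: Configure loadbalancer for nova-spicehtml5proxy
      include_tasks: cell_proxy_loadbalancer.yml
      loop:
        - nova-spicehtml5proxy

    - name: Configure loadbalancer for nova-serialproxy
      include_tasks: cell_proxy_loadbalancer.yml
      loop:
        - nova-serialproxy
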
20:10:10.798520 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:10:10.798530 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:10:10.798540 | orchestrator | 2025-07-06 20:10:10.798550 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-06 20:10:10.798560 | orchestrator | Sunday 06 July 2025 20:08:28 +0000 (0:00:02.401) 0:04:32.127 *********** 2025-07-06 20:10:10.798570 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:10:10.798579 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:10:10.798589 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:10:10.798599 | orchestrator | 2025-07-06 20:10:10.798609 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-07-06 20:10:10.798619 | orchestrator | Sunday 06 July 2025 20:08:31 +0000 (0:00:03.259) 0:04:35.387 *********** 2025-07-06 20:10:10.798629 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.798639 | orchestrator | 2025-07-06 20:10:10.798649 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-07-06 20:10:10.798663 | orchestrator | Sunday 06 July 2025 20:08:33 +0000 (0:00:01.351) 0:04:36.738 *********** 2025-07-06 20:10:10.798674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.798686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-06 20:10:10.798697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.798724 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.798741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.798756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.798767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-06 20:10:10.798777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.798788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.798812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.798831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.798842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-06 20:10:10.798858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.798869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.798880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.798890 | orchestrator | 2025-07-06 20:10:10.798901 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-07-06 20:10:10.798912 | orchestrator | Sunday 06 July 2025 20:08:37 +0000 (0:00:03.824) 0:04:40.562 *********** 2025-07-06 20:10:10.798937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.798958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-06 20:10:10.798978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.798989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.799000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.799010 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.799022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.799053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-06 20:10:10.799065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.799075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.799091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.799102 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.799113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.799124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-06 20:10:10.799155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.799167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-06 20:10:10.799178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:10:10.799189 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.799200 | orchestrator | 2025-07-06 20:10:10.799211 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-07-06 20:10:10.799225 | orchestrator | Sunday 06 July 2025 20:08:37 +0000 (0:00:00.694) 0:04:41.257 *********** 2025-07-06 20:10:10.799237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-06 20:10:10.799247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-06 20:10:10.799258 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.799269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-06 20:10:10.799279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-06 20:10:10.799290 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.799301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-06 20:10:10.799311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-06 20:10:10.799329 | orchestrator | skipping: [testbed-node-2] 
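In the octavia haproxy-config tasks above, only the octavia-api item is reported as changed on each node; the driver-agent, health-manager, housekeeping and worker items are skipped because their service definitions carry no 'haproxy' section, so there is nothing for the role to render, and the firewall task is skipped on all three nodes as well. The short Python sketch below is illustrative only (it is not kolla-ansible code) and assumes the role simply filters on enabled services that declare a non-empty 'haproxy' entry; it reproduces the per-item results seen in the log from trimmed-down copies of the printed item dicts.

# Illustrative sketch, not kolla-ansible source: reproduce the "changed" vs
# "skipping" pattern of the "Copying over octavia haproxy config" task.

octavia_services = {
    "octavia-api": {
        "container_name": "octavia_api",
        "enabled": True,
        "haproxy": {
            "octavia_api": {"enabled": "yes", "mode": "http", "external": False,
                            "port": "9876", "listen_port": "9876", "tls_backend": "no"},
            "octavia_api_external": {"enabled": "yes", "mode": "http", "external": True,
                                     "external_fqdn": "api.testbed.osism.xyz",
                                     "port": "9876", "listen_port": "9876",
                                     "tls_backend": "no"},
        },
    },
    # These four services define no 'haproxy' section in the log output.
    "octavia-driver-agent": {"container_name": "octavia_driver_agent", "enabled": True},
    "octavia-health-manager": {"container_name": "octavia_health_manager", "enabled": True},
    "octavia-housekeeping": {"container_name": "octavia_housekeeping", "enabled": True},
    "octavia-worker": {"container_name": "octavia_worker", "enabled": True},
}

def needs_haproxy_config(service: dict) -> bool:
    """Assumed rule: render HAProxy config only for enabled services that
    actually declare a 'haproxy' section."""
    return bool(service.get("enabled")) and bool(service.get("haproxy"))

for name, service in octavia_services.items():
    state = "changed" if needs_haproxy_config(service) else "skipping"
    print(f"{state}: {name}")

# Expected output, matching the per-item results above:
# changed: octavia-api
# skipping: octavia-driver-agent
# skipping: octavia-health-manager
# skipping: octavia-housekeeping
# skipping: octavia-worker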
2025-07-06 20:10:10.799339 | orchestrator | 2025-07-06 20:10:10.799350 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-07-06 20:10:10.799361 | orchestrator | Sunday 06 July 2025 20:08:38 +0000 (0:00:00.912) 0:04:42.169 *********** 2025-07-06 20:10:10.799371 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.799381 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.799392 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.799402 | orchestrator | 2025-07-06 20:10:10.799412 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-07-06 20:10:10.799423 | orchestrator | Sunday 06 July 2025 20:08:40 +0000 (0:00:01.975) 0:04:44.144 *********** 2025-07-06 20:10:10.799434 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.799444 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.799455 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.799512 | orchestrator | 2025-07-06 20:10:10.799523 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-07-06 20:10:10.799533 | orchestrator | Sunday 06 July 2025 20:08:42 +0000 (0:00:02.172) 0:04:46.317 *********** 2025-07-06 20:10:10.799544 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.799554 | orchestrator | 2025-07-06 20:10:10.799564 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-07-06 20:10:10.799573 | orchestrator | Sunday 06 July 2025 20:08:44 +0000 (0:00:01.304) 0:04:47.621 *********** 2025-07-06 20:10:10.799599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:10:10.799617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:10:10.799628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:10:10.799647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:10:10.799676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:10:10.799689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:10:10.799700 | orchestrator | 2025-07-06 20:10:10.799715 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-07-06 20:10:10.799727 | orchestrator | Sunday 06 July 2025 20:08:50 +0000 (0:00:06.072) 0:04:53.694 *********** 2025-07-06 20:10:10.799737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-06 20:10:10.799754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-06 20:10:10.799765 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.799793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-06 20:10:10.799806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-06 20:10:10.799818 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.799834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-06 20:10:10.799852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-06 20:10:10.799864 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.799875 | orchestrator | 2025-07-06 20:10:10.799886 | 
orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-07-06 20:10:10.799897 | orchestrator | Sunday 06 July 2025 20:08:51 +0000 (0:00:01.303) 0:04:54.997 *********** 2025-07-06 20:10:10.799908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-06 20:10:10.799933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-06 20:10:10.799946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-06 20:10:10.799957 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.799968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-06 20:10:10.799979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-06 20:10:10.799990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-06 20:10:10.800001 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.800012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-06 20:10:10.800023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-06 20:10:10.800044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-06 20:10:10.800056 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.800066 | orchestrator | 2025-07-06 20:10:10.800077 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-07-06 20:10:10.800088 | orchestrator | Sunday 06 July 2025 20:08:52 +0000 (0:00:00.811) 0:04:55.809 *********** 2025-07-06 20:10:10.800099 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.800110 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.800121 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.800132 | orchestrator | 2025-07-06 20:10:10.800143 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-07-06 20:10:10.800154 | orchestrator | Sunday 06 July 2025 
20:08:52 +0000 (0:00:00.374) 0:04:56.184 *********** 2025-07-06 20:10:10.800164 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.800175 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.800186 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.800197 | orchestrator | 2025-07-06 20:10:10.800208 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-07-06 20:10:10.800219 | orchestrator | Sunday 06 July 2025 20:08:53 +0000 (0:00:01.226) 0:04:57.410 *********** 2025-07-06 20:10:10.800230 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.800241 | orchestrator | 2025-07-06 20:10:10.800252 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-07-06 20:10:10.800263 | orchestrator | Sunday 06 July 2025 20:08:55 +0000 (0:00:01.517) 0:04:58.928 *********** 2025-07-06 20:10:10.800274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-06 20:10:10.800299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:10:10.800312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:10:10.800362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-06 20:10:10.800373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:10:10.800385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-06 20:10:10.800441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:10:10.800457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:10:10.800484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:10:10.800524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-06 20:10:10.800537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-06 20:10:10.800559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:10:10.800593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-06 20:10:10.800611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-06 20:10:10.800628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:10:10.800666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-06 20:10:10.800677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-06 20:10:10.800694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:10:10.800734 | orchestrator | 2025-07-06 20:10:10.800744 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 
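The prometheus service definitions echoed above carry two notable listener options: active_passive for prometheus_server and prometheus_alertmanager (typically meaning a single active backend with the rest acting as backups), and HTTP basic-auth credentials on the alertmanager listeners; the external prometheus_server listener is declared but disabled. The Python sketch below is illustrative only (it is not part of the playbooks, and the auth_pass value from the log is deliberately omitted): it condenses those dicts into the listeners the role will actually render.

# Illustrative sketch: summarize which HAProxy listeners the prometheus
# service definitions above request, based solely on the dicts in the log.

prometheus_services = {
    "prometheus-server": {
        "haproxy": {
            "prometheus_server": {"enabled": True, "external": False,
                                  "port": "9091", "active_passive": True},
            "prometheus_server_external": {"enabled": False, "external": True,
                                           "external_fqdn": "api.testbed.osism.xyz",
                                           "port": "9091", "active_passive": True},
        },
    },
    "prometheus-alertmanager": {
        "haproxy": {
            "prometheus_alertmanager": {"enabled": True, "external": False,
                                        "port": "9093", "auth_user": "admin",
                                        "active_passive": True},
            "prometheus_alertmanager_external": {"enabled": True, "external": True,
                                                 "external_fqdn": "api.testbed.osism.xyz",
                                                 "port": "9093", "listen_port": "9093",
                                                 "auth_user": "admin",
                                                 "active_passive": True},
        },
    },
}

for service, spec in prometheus_services.items():
    for listener, opts in spec["haproxy"].items():
        if not opts.get("enabled"):
            # prometheus_server_external is declared but disabled, so no
            # external frontend is rendered for the Prometheus server itself.
            continue
        scope = "external" if opts.get("external") else "internal"
        extras = [key for key in ("active_passive", "auth_user") if opts.get(key)]
        print(f"{listener}: {scope} listener on port {opts['port']}"
              f" ({', '.join(extras) if extras else 'no extra options'})")

# Expected output:
# prometheus_server: internal listener on port 9091 (active_passive)
# prometheus_alertmanager: internal listener on port 9093 (active_passive, auth_user)
# prometheus_alertmanager_external: external listener on port 9093 (active_passive, auth_user)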
2025-07-06 20:10:10.800755 | orchestrator | Sunday 06 July 2025 20:08:59 +0000 (0:00:04.047) 0:05:02.975 *********** 2025-07-06 20:10:10.800769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-06 20:10:10.800781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:10:10.800792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:10:10.800843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-06 20:10:10.800858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-06 20:10:10.800869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:10:10.800908 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.800924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-06 20:10:10.800935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:10:10.800946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.800972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:10:10.800984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-06 20:10:10.801007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-06 20:10:10.801018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.801029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.801044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:10:10.801055 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.801066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-06 20:10:10.801077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:10:10.801098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.801114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.801126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:10:10.801141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}}}})  2025-07-06 20:10:10.801153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-06 20:10:10.801163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.801243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:10:10.801260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:10:10.801270 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.801279 | orchestrator | 2025-07-06 20:10:10.801288 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-07-06 20:10:10.801298 | orchestrator | Sunday 06 July 2025 20:09:01 +0000 (0:00:01.598) 0:05:04.574 *********** 2025-07-06 20:10:10.801307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-06 20:10:10.801317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-06 20:10:10.801326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-06 20:10:10.801336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-06 20:10:10.801345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-06 20:10:10.801359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-06 20:10:10.801369 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.801378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-06 20:10:10.801388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-06 20:10:10.801396 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.801405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-06 20:10:10.801421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-06 20:10:10.801430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-06 20:10:10.801440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-06 20:10:10.801449 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.801458 | orchestrator | 2025-07-06 20:10:10.801483 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-07-06 20:10:10.801493 | orchestrator | Sunday 06 July 2025 20:09:02 +0000 (0:00:00.987) 0:05:05.561 *********** 2025-07-06 20:10:10.801501 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.801511 | orchestrator | skipping: 
[testbed-node-1] 2025-07-06 20:10:10.801520 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.801529 | orchestrator | 2025-07-06 20:10:10.801538 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-07-06 20:10:10.801547 | orchestrator | Sunday 06 July 2025 20:09:02 +0000 (0:00:00.406) 0:05:05.967 *********** 2025-07-06 20:10:10.801561 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.801570 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.801579 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.801588 | orchestrator | 2025-07-06 20:10:10.801597 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-07-06 20:10:10.801606 | orchestrator | Sunday 06 July 2025 20:09:04 +0000 (0:00:01.679) 0:05:07.647 *********** 2025-07-06 20:10:10.801616 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.801625 | orchestrator | 2025-07-06 20:10:10.801633 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-07-06 20:10:10.801643 | orchestrator | Sunday 06 July 2025 20:09:05 +0000 (0:00:01.721) 0:05:09.368 *********** 2025-07-06 20:10:10.801652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:10:10.801663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:10:10.801761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:10:10.801783 | orchestrator | 2025-07-06 20:10:10.801793 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-07-06 20:10:10.801802 | orchestrator | Sunday 06 July 2025 20:09:08 +0000 (0:00:02.423) 0:05:11.791 *********** 2025-07-06 20:10:10.801821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-06 20:10:10.801832 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.801841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-06 20:10:10.801856 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.801865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-06 20:10:10.801883 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.801893 | orchestrator | 2025-07-06 20:10:10.801902 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-07-06 20:10:10.801911 | orchestrator | Sunday 06 July 2025 20:09:08 +0000 (0:00:00.377) 0:05:12.169 *********** 2025-07-06 20:10:10.801921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-06 20:10:10.801932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-06 20:10:10.801941 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.801951 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.801960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-06 20:10:10.801971 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.801980 | orchestrator | 2025-07-06 20:10:10.801990 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-07-06 20:10:10.802000 | orchestrator | Sunday 06 July 2025 20:09:09 +0000 (0:00:00.939) 0:05:13.108 *********** 2025-07-06 20:10:10.802010 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.802093 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.802103 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.802114 | orchestrator | 2025-07-06 20:10:10.802125 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-07-06 20:10:10.802136 | orchestrator | Sunday 06 July 2025 20:09:10 +0000 (0:00:00.367) 0:05:13.476 *********** 2025-07-06 20:10:10.802146 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.802156 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.802167 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.802177 | orchestrator | 2025-07-06 20:10:10.802187 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-07-06 20:10:10.802205 | orchestrator | Sunday 06 July 2025 20:09:11 +0000 (0:00:01.089) 0:05:14.566 *********** 2025-07-06 20:10:10.802215 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:10:10.802225 | orchestrator | 2025-07-06 20:10:10.802235 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy 
config] ******************** 2025-07-06 20:10:10.802245 | orchestrator | Sunday 06 July 2025 20:09:12 +0000 (0:00:01.532) 0:05:16.098 *********** 2025-07-06 20:10:10.802256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.802280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.802292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.802304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.802322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.802342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-06 20:10:10.802353 | orchestrator | 2025-07-06 20:10:10.802363 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-07-06 20:10:10.802373 | orchestrator | Sunday 06 July 2025 20:09:18 +0000 (0:00:05.697) 0:05:21.795 *********** 2025-07-06 20:10:10.802383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.802394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.802404 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.802420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.802442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.802454 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.802522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.802536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-07-06 20:10:10.802546 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.802556 | orchestrator | 2025-07-06 20:10:10.802567 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-07-06 20:10:10.802577 | orchestrator | Sunday 06 July 2025 20:09:18 +0000 (0:00:00.609) 0:05:22.405 *********** 2025-07-06 20:10:10.802588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-06 20:10:10.802603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-06 20:10:10.802616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-06 20:10:10.802623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-06 20:10:10.802630 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.802636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-06 20:10:10.802643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-06 20:10:10.802649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-06 20:10:10.802655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-06 20:10:10.802666 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.802672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-06 20:10:10.802679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-06 20:10:10.802685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-06 20:10:10.802691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-06 20:10:10.802698 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.802704 | orchestrator | 2025-07-06 20:10:10.802710 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-07-06 20:10:10.802716 | orchestrator | Sunday 06 July 2025 20:09:20 +0000 (0:00:01.666) 0:05:24.071 *********** 2025-07-06 20:10:10.802723 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.802729 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.802735 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.802741 | orchestrator | 2025-07-06 20:10:10.802748 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-07-06 20:10:10.802754 | orchestrator | Sunday 06 July 2025 20:09:21 +0000 (0:00:01.289) 0:05:25.361 *********** 2025-07-06 20:10:10.802760 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.802766 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.802772 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.802779 | orchestrator | 2025-07-06 20:10:10.802785 | orchestrator | TASK [include_role : swift] **************************************************** 2025-07-06 20:10:10.802791 | orchestrator | Sunday 06 July 2025 20:09:24 +0000 (0:00:02.111) 0:05:27.472 *********** 2025-07-06 20:10:10.802797 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.802803 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.802816 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.802822 | orchestrator | 2025-07-06 20:10:10.802829 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-07-06 20:10:10.802835 | orchestrator | Sunday 06 July 2025 20:09:24 +0000 (0:00:00.300) 0:05:27.773 
*********** 2025-07-06 20:10:10.802841 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.802847 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.802853 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.802859 | orchestrator | 2025-07-06 20:10:10.802866 | orchestrator | TASK [include_role : trove] **************************************************** 2025-07-06 20:10:10.802872 | orchestrator | Sunday 06 July 2025 20:09:24 +0000 (0:00:00.294) 0:05:28.067 *********** 2025-07-06 20:10:10.802878 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.802884 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.802890 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.802896 | orchestrator | 2025-07-06 20:10:10.802903 | orchestrator | TASK [include_role : venus] **************************************************** 2025-07-06 20:10:10.802909 | orchestrator | Sunday 06 July 2025 20:09:25 +0000 (0:00:00.645) 0:05:28.712 *********** 2025-07-06 20:10:10.802924 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.802935 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.802944 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.802954 | orchestrator | 2025-07-06 20:10:10.802964 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-07-06 20:10:10.802973 | orchestrator | Sunday 06 July 2025 20:09:25 +0000 (0:00:00.304) 0:05:29.016 *********** 2025-07-06 20:10:10.802981 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.802990 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.802998 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.803008 | orchestrator | 2025-07-06 20:10:10.803018 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-07-06 20:10:10.803028 | orchestrator | Sunday 06 July 2025 20:09:25 +0000 (0:00:00.261) 0:05:29.278 *********** 2025-07-06 20:10:10.803038 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.803048 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.803057 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.803065 | orchestrator | 2025-07-06 20:10:10.803071 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-07-06 20:10:10.803076 | orchestrator | Sunday 06 July 2025 20:09:26 +0000 (0:00:00.681) 0:05:29.959 *********** 2025-07-06 20:10:10.803081 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:10:10.803087 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:10:10.803093 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:10:10.803098 | orchestrator | 2025-07-06 20:10:10.803103 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-07-06 20:10:10.803109 | orchestrator | Sunday 06 July 2025 20:09:27 +0000 (0:00:00.641) 0:05:30.601 *********** 2025-07-06 20:10:10.803114 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:10:10.803120 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:10:10.803125 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:10:10.803130 | orchestrator | 2025-07-06 20:10:10.803136 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-07-06 20:10:10.803141 | orchestrator | Sunday 06 July 2025 20:09:27 +0000 (0:00:00.276) 0:05:30.877 *********** 2025-07-06 20:10:10.803147 | orchestrator | ok: [testbed-node-0] 2025-07-06 
20:10:10.803152 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:10:10.803157 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:10:10.803163 | orchestrator | 2025-07-06 20:10:10.803168 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-07-06 20:10:10.803177 | orchestrator | Sunday 06 July 2025 20:09:28 +0000 (0:00:00.798) 0:05:31.676 *********** 2025-07-06 20:10:10.803183 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:10:10.803188 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:10:10.803194 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:10:10.803199 | orchestrator | 2025-07-06 20:10:10.803210 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-07-06 20:10:10.803215 | orchestrator | Sunday 06 July 2025 20:09:29 +0000 (0:00:01.216) 0:05:32.893 *********** 2025-07-06 20:10:10.803221 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:10:10.803226 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:10:10.803231 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:10:10.803237 | orchestrator | 2025-07-06 20:10:10.803242 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-07-06 20:10:10.803248 | orchestrator | Sunday 06 July 2025 20:09:30 +0000 (0:00:00.804) 0:05:33.697 *********** 2025-07-06 20:10:10.803253 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.803259 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.803264 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.803269 | orchestrator | 2025-07-06 20:10:10.803275 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-07-06 20:10:10.803280 | orchestrator | Sunday 06 July 2025 20:09:38 +0000 (0:00:08.203) 0:05:41.901 *********** 2025-07-06 20:10:10.803285 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:10:10.803291 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:10:10.803296 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:10:10.803302 | orchestrator | 2025-07-06 20:10:10.803307 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-07-06 20:10:10.803312 | orchestrator | Sunday 06 July 2025 20:09:39 +0000 (0:00:00.776) 0:05:42.677 *********** 2025-07-06 20:10:10.803318 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:10:10.803323 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.803329 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.803334 | orchestrator | 2025-07-06 20:10:10.803339 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-07-06 20:10:10.803345 | orchestrator | Sunday 06 July 2025 20:09:54 +0000 (0:00:14.866) 0:05:57.544 *********** 2025-07-06 20:10:10.803350 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:10:10.803356 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:10:10.803361 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:10:10.803367 | orchestrator | 2025-07-06 20:10:10.803372 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-07-06 20:10:10.803377 | orchestrator | Sunday 06 July 2025 20:09:54 +0000 (0:00:00.799) 0:05:58.343 *********** 2025-07-06 20:10:10.803383 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:10:10.803388 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:10:10.803394 | orchestrator | changed: [testbed-node-0] 2025-07-06 
20:10:10.803399 | orchestrator | 2025-07-06 20:10:10.803404 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-07-06 20:10:10.803410 | orchestrator | Sunday 06 July 2025 20:10:03 +0000 (0:00:08.248) 0:06:06.592 *********** 2025-07-06 20:10:10.803415 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.803421 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.803426 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.803431 | orchestrator | 2025-07-06 20:10:10.803437 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-07-06 20:10:10.803442 | orchestrator | Sunday 06 July 2025 20:10:03 +0000 (0:00:00.334) 0:06:06.926 *********** 2025-07-06 20:10:10.803448 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.803453 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.803459 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.803479 | orchestrator | 2025-07-06 20:10:10.803485 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-07-06 20:10:10.803490 | orchestrator | Sunday 06 July 2025 20:10:04 +0000 (0:00:00.685) 0:06:07.612 *********** 2025-07-06 20:10:10.803496 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.803501 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.803511 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.803517 | orchestrator | 2025-07-06 20:10:10.803523 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-07-06 20:10:10.803592 | orchestrator | Sunday 06 July 2025 20:10:04 +0000 (0:00:00.329) 0:06:07.941 *********** 2025-07-06 20:10:10.803599 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.803605 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.803611 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.803616 | orchestrator | 2025-07-06 20:10:10.803622 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-07-06 20:10:10.803627 | orchestrator | Sunday 06 July 2025 20:10:04 +0000 (0:00:00.313) 0:06:08.255 *********** 2025-07-06 20:10:10.803632 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.803638 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.803643 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.803649 | orchestrator | 2025-07-06 20:10:10.803654 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-07-06 20:10:10.803660 | orchestrator | Sunday 06 July 2025 20:10:05 +0000 (0:00:00.320) 0:06:08.575 *********** 2025-07-06 20:10:10.803665 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:10:10.803671 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:10:10.803676 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:10:10.803682 | orchestrator | 2025-07-06 20:10:10.803687 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-07-06 20:10:10.803693 | orchestrator | Sunday 06 July 2025 20:10:05 +0000 (0:00:00.693) 0:06:09.269 *********** 2025-07-06 20:10:10.803698 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:10:10.803704 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:10:10.803709 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:10:10.803715 | orchestrator | 2025-07-06 20:10:10.803720 | orchestrator | 
RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-07-06 20:10:10.803726 | orchestrator | Sunday 06 July 2025 20:10:06 +0000 (0:00:00.862) 0:06:10.131 *********** 2025-07-06 20:10:10.803731 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:10:10.803737 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:10:10.803742 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:10:10.803748 | orchestrator | 2025-07-06 20:10:10.803753 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:10:10.803762 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-07-06 20:10:10.803769 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-07-06 20:10:10.803774 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-07-06 20:10:10.803780 | orchestrator | 2025-07-06 20:10:10.803785 | orchestrator | 2025-07-06 20:10:10.803791 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:10:10.803796 | orchestrator | Sunday 06 July 2025 20:10:07 +0000 (0:00:00.791) 0:06:10.923 *********** 2025-07-06 20:10:10.803802 | orchestrator | =============================================================================== 2025-07-06 20:10:10.803807 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 14.87s 2025-07-06 20:10:10.803813 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.25s 2025-07-06 20:10:10.803818 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.20s 2025-07-06 20:10:10.803823 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 6.47s 2025-07-06 20:10:10.803829 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.07s 2025-07-06 20:10:10.803834 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.70s 2025-07-06 20:10:10.803840 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.87s 2025-07-06 20:10:10.803845 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.75s 2025-07-06 20:10:10.803851 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.50s 2025-07-06 20:10:10.803860 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.47s 2025-07-06 20:10:10.803865 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.41s 2025-07-06 20:10:10.803871 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.37s 2025-07-06 20:10:10.803876 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.18s 2025-07-06 20:10:10.803882 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.13s 2025-07-06 20:10:10.803887 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 4.07s 2025-07-06 20:10:10.803892 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.05s 2025-07-06 20:10:10.803898 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.03s 
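The long run of "Task … is in state STARTED" / "Wait 1 second(s) until the next check" messages that follows comes from a simple poll-and-wait pattern: the deploy wrapper repeatedly asks the task backend for the state of each queued task and sleeps between checks until every task leaves the running state. A minimal sketch of that pattern, assuming a hypothetical get_task_state() lookup rather than the actual OSISM client API:

```python
import time

def get_task_state(task_id: str) -> str:
    # Hypothetical stub: in the real tooling the state would come from the
    # task backend (e.g. a Celery result store); replace with a real lookup.
    raise NotImplementedError("replace with a real task-state lookup")

def wait_for_tasks(task_ids, interval=1.0):
    """Poll each task until it reports a terminal state (SUCCESS/FAILURE)."""
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so finished tasks can be removed safely.
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

In the log below, three task IDs are polled this way until one of them reaches SUCCESS and its buffered Ansible output is printed.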
2025-07-06 20:10:10.803903 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.91s 2025-07-06 20:10:10.803909 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.82s 2025-07-06 20:10:10.803914 | orchestrator | loadbalancer : Check loadbalancer containers ---------------------------- 3.67s 2025-07-06 20:10:10.803920 | orchestrator | 2025-07-06 20:10:10 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:13.833201 | orchestrator | 2025-07-06 20:10:13 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:10:13.834813 | orchestrator | 2025-07-06 20:10:13 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:13.837826 | orchestrator | 2025-07-06 20:10:13 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:10:13.837872 | orchestrator | 2025-07-06 20:10:13 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:16.888481 | orchestrator | 2025-07-06 20:10:16 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:10:16.890590 | orchestrator | 2025-07-06 20:10:16 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:16.892980 | orchestrator | 2025-07-06 20:10:16 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:10:16.893024 | orchestrator | 2025-07-06 20:10:16 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:19.928030 | orchestrator | 2025-07-06 20:10:19 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:10:19.928128 | orchestrator | 2025-07-06 20:10:19 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:19.928143 | orchestrator | 2025-07-06 20:10:19 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:10:19.928498 | orchestrator | 2025-07-06 20:10:19 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:22.966666 | orchestrator | 2025-07-06 20:10:22 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:10:22.967403 | orchestrator | 2025-07-06 20:10:22 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:22.968586 | orchestrator | 2025-07-06 20:10:22 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:10:22.968640 | orchestrator | 2025-07-06 20:10:22 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:26.021919 | orchestrator | 2025-07-06 20:10:26 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:10:26.022893 | orchestrator | 2025-07-06 20:10:26 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:26.023061 | orchestrator | 2025-07-06 20:10:26 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:10:26.023403 | orchestrator | 2025-07-06 20:10:26 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:29.060739 | orchestrator | 2025-07-06 20:10:29 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:10:29.061720 | orchestrator | 2025-07-06 20:10:29 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:29.062690 | orchestrator | 2025-07-06 20:10:29 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:10:29.062751 | orchestrator | 2025-07-06 
20:10:29 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:32.108503 | orchestrator | 2025-07-06 20:10:32 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:10:32.108589 | orchestrator | 2025-07-06 20:10:32 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:32.108599 | orchestrator | 2025-07-06 20:10:32 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:10:32.108607 | orchestrator | 2025-07-06 20:10:32 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:35.148815 | orchestrator | 2025-07-06 20:10:35 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:10:35.153482 | orchestrator | 2025-07-06 20:10:35 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:35.154245 | orchestrator | 2025-07-06 20:10:35 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:10:35.155586 | orchestrator | 2025-07-06 20:10:35 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:38.197137 | orchestrator | 2025-07-06 20:10:38 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:10:38.198782 | orchestrator | 2025-07-06 20:10:38 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:38.200206 | orchestrator | 2025-07-06 20:10:38 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:10:38.200235 | orchestrator | 2025-07-06 20:10:38 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:41.240151 | orchestrator | 2025-07-06 20:10:41 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:10:41.240986 | orchestrator | 2025-07-06 20:10:41 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:41.242501 | orchestrator | 2025-07-06 20:10:41 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:10:41.242547 | orchestrator | 2025-07-06 20:10:41 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:44.288143 | orchestrator | 2025-07-06 20:10:44 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:10:44.290415 | orchestrator | 2025-07-06 20:10:44 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:44.292490 | orchestrator | 2025-07-06 20:10:44 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:10:44.292773 | orchestrator | 2025-07-06 20:10:44 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:47.347420 | orchestrator | 2025-07-06 20:10:47 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:10:47.349147 | orchestrator | 2025-07-06 20:10:47 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:47.350767 | orchestrator | 2025-07-06 20:10:47 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:10:47.350802 | orchestrator | 2025-07-06 20:10:47 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:50.406977 | orchestrator | 2025-07-06 20:10:50 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:10:50.408648 | orchestrator | 2025-07-06 20:10:50 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:50.409400 | orchestrator | 2025-07-06 20:10:50 | INFO  | Task 
c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:10:50.409895 | orchestrator | 2025-07-06 20:10:50 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:53.452888 | orchestrator | 2025-07-06 20:10:53 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:10:53.453781 | orchestrator | 2025-07-06 20:10:53 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:53.455119 | orchestrator | 2025-07-06 20:10:53 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:10:53.455207 | orchestrator | 2025-07-06 20:10:53 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:56.510210 | orchestrator | 2025-07-06 20:10:56 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:10:56.511239 | orchestrator | 2025-07-06 20:10:56 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:56.513030 | orchestrator | 2025-07-06 20:10:56 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:10:56.513065 | orchestrator | 2025-07-06 20:10:56 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:59.558593 | orchestrator | 2025-07-06 20:10:59 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:10:59.560621 | orchestrator | 2025-07-06 20:10:59 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:10:59.561921 | orchestrator | 2025-07-06 20:10:59 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:10:59.561956 | orchestrator | 2025-07-06 20:10:59 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:02.618363 | orchestrator | 2025-07-06 20:11:02 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:02.619240 | orchestrator | 2025-07-06 20:11:02 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:02.621659 | orchestrator | 2025-07-06 20:11:02 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:02.621755 | orchestrator | 2025-07-06 20:11:02 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:05.668249 | orchestrator | 2025-07-06 20:11:05 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:05.669788 | orchestrator | 2025-07-06 20:11:05 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:05.671450 | orchestrator | 2025-07-06 20:11:05 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:05.671679 | orchestrator | 2025-07-06 20:11:05 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:08.723458 | orchestrator | 2025-07-06 20:11:08 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:08.725186 | orchestrator | 2025-07-06 20:11:08 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:08.726881 | orchestrator | 2025-07-06 20:11:08 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:08.726949 | orchestrator | 2025-07-06 20:11:08 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:11.767081 | orchestrator | 2025-07-06 20:11:11 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:11.767763 | orchestrator | 2025-07-06 20:11:11 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state 
STARTED 2025-07-06 20:11:11.769087 | orchestrator | 2025-07-06 20:11:11 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:11.769116 | orchestrator | 2025-07-06 20:11:11 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:14.824486 | orchestrator | 2025-07-06 20:11:14 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:14.827256 | orchestrator | 2025-07-06 20:11:14 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:14.828676 | orchestrator | 2025-07-06 20:11:14 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:14.829366 | orchestrator | 2025-07-06 20:11:14 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:17.882289 | orchestrator | 2025-07-06 20:11:17 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:17.888355 | orchestrator | 2025-07-06 20:11:17 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:17.890259 | orchestrator | 2025-07-06 20:11:17 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:17.890320 | orchestrator | 2025-07-06 20:11:17 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:20.936507 | orchestrator | 2025-07-06 20:11:20 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:20.938728 | orchestrator | 2025-07-06 20:11:20 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:20.940882 | orchestrator | 2025-07-06 20:11:20 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:20.940906 | orchestrator | 2025-07-06 20:11:20 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:23.988485 | orchestrator | 2025-07-06 20:11:23 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:23.990149 | orchestrator | 2025-07-06 20:11:23 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:23.992134 | orchestrator | 2025-07-06 20:11:23 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:23.992163 | orchestrator | 2025-07-06 20:11:23 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:27.054681 | orchestrator | 2025-07-06 20:11:27 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:27.056400 | orchestrator | 2025-07-06 20:11:27 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:27.058933 | orchestrator | 2025-07-06 20:11:27 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:27.058978 | orchestrator | 2025-07-06 20:11:27 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:30.106544 | orchestrator | 2025-07-06 20:11:30 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:30.107247 | orchestrator | 2025-07-06 20:11:30 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:30.110813 | orchestrator | 2025-07-06 20:11:30 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:30.110845 | orchestrator | 2025-07-06 20:11:30 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:33.159740 | orchestrator | 2025-07-06 20:11:33 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:33.161079 | orchestrator 
| 2025-07-06 20:11:33 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:33.163028 | orchestrator | 2025-07-06 20:11:33 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:33.163394 | orchestrator | 2025-07-06 20:11:33 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:36.209076 | orchestrator | 2025-07-06 20:11:36 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:36.211183 | orchestrator | 2025-07-06 20:11:36 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:36.213584 | orchestrator | 2025-07-06 20:11:36 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:36.215030 | orchestrator | 2025-07-06 20:11:36 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:39.252567 | orchestrator | 2025-07-06 20:11:39 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:39.253010 | orchestrator | 2025-07-06 20:11:39 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:39.254604 | orchestrator | 2025-07-06 20:11:39 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:39.254639 | orchestrator | 2025-07-06 20:11:39 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:42.302496 | orchestrator | 2025-07-06 20:11:42 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:42.305253 | orchestrator | 2025-07-06 20:11:42 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:42.307779 | orchestrator | 2025-07-06 20:11:42 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:42.307818 | orchestrator | 2025-07-06 20:11:42 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:45.352907 | orchestrator | 2025-07-06 20:11:45 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:45.357583 | orchestrator | 2025-07-06 20:11:45 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:45.361110 | orchestrator | 2025-07-06 20:11:45 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:45.361140 | orchestrator | 2025-07-06 20:11:45 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:48.406116 | orchestrator | 2025-07-06 20:11:48 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:48.410690 | orchestrator | 2025-07-06 20:11:48 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:48.410747 | orchestrator | 2025-07-06 20:11:48 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:48.410757 | orchestrator | 2025-07-06 20:11:48 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:51.453194 | orchestrator | 2025-07-06 20:11:51 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:51.456955 | orchestrator | 2025-07-06 20:11:51 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:51.459575 | orchestrator | 2025-07-06 20:11:51 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:51.459621 | orchestrator | 2025-07-06 20:11:51 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:54.505888 | orchestrator | 2025-07-06 20:11:54 | INFO  | Task 
e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:54.507559 | orchestrator | 2025-07-06 20:11:54 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:54.509628 | orchestrator | 2025-07-06 20:11:54 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:54.509670 | orchestrator | 2025-07-06 20:11:54 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:57.555886 | orchestrator | 2025-07-06 20:11:57 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:11:57.558148 | orchestrator | 2025-07-06 20:11:57 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state STARTED 2025-07-06 20:11:57.559881 | orchestrator | 2025-07-06 20:11:57 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:11:57.559959 | orchestrator | 2025-07-06 20:11:57 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:00.616292 | orchestrator | 2025-07-06 20:12:00 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:00.620869 | orchestrator | 2025-07-06 20:12:00 | INFO  | Task cc960135-7277-4b53-aaf6-14b21ffe1e27 is in state SUCCESS 2025-07-06 20:12:00.624163 | orchestrator | 2025-07-06 20:12:00.624219 | orchestrator | 2025-07-06 20:12:00.624232 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-07-06 20:12:00.624245 | orchestrator | 2025-07-06 20:12:00.624257 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-07-06 20:12:00.624269 | orchestrator | Sunday 06 July 2025 20:01:14 +0000 (0:00:00.711) 0:00:00.711 *********** 2025-07-06 20:12:00.624281 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.624294 | orchestrator | 2025-07-06 20:12:00.624305 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-07-06 20:12:00.624316 | orchestrator | Sunday 06 July 2025 20:01:15 +0000 (0:00:01.214) 0:00:01.925 *********** 2025-07-06 20:12:00.624327 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.624340 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.624351 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.624362 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.624373 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.624636 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.624655 | orchestrator | 2025-07-06 20:12:00.624667 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-07-06 20:12:00.624678 | orchestrator | Sunday 06 July 2025 20:01:17 +0000 (0:00:01.865) 0:00:03.791 *********** 2025-07-06 20:12:00.624689 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.624700 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.624711 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.624722 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.624863 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.624879 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.624894 | orchestrator | 2025-07-06 20:12:00.624913 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-07-06 20:12:00.624943 | orchestrator | Sunday 06 July 2025 20:01:18 +0000 (0:00:00.709) 0:00:04.500 
*********** 2025-07-06 20:12:00.624963 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.624998 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.625016 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.625036 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.625053 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.625066 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.625079 | orchestrator | 2025-07-06 20:12:00.625092 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-07-06 20:12:00.625105 | orchestrator | Sunday 06 July 2025 20:01:19 +0000 (0:00:00.889) 0:00:05.389 *********** 2025-07-06 20:12:00.625116 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.625127 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.625165 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.625177 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.625188 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.625198 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.625209 | orchestrator | 2025-07-06 20:12:00.625220 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-07-06 20:12:00.625244 | orchestrator | Sunday 06 July 2025 20:01:19 +0000 (0:00:00.738) 0:00:06.128 *********** 2025-07-06 20:12:00.625256 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.625266 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.625277 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.625287 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.625298 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.625308 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.625319 | orchestrator | 2025-07-06 20:12:00.625330 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-07-06 20:12:00.625341 | orchestrator | Sunday 06 July 2025 20:01:20 +0000 (0:00:00.623) 0:00:06.751 *********** 2025-07-06 20:12:00.625351 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.625362 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.625373 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.625408 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.625419 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.625430 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.625440 | orchestrator | 2025-07-06 20:12:00.625452 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-07-06 20:12:00.625463 | orchestrator | Sunday 06 July 2025 20:01:21 +0000 (0:00:01.088) 0:00:07.839 *********** 2025-07-06 20:12:00.625474 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.625486 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.625497 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.625546 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.625584 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.625597 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.625607 | orchestrator | 2025-07-06 20:12:00.625619 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-07-06 20:12:00.625630 | orchestrator | Sunday 06 July 2025 20:01:22 +0000 (0:00:00.778) 0:00:08.618 *********** 2025-07-06 20:12:00.625641 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.625652 | orchestrator | ok: 
[testbed-node-4] 2025-07-06 20:12:00.625663 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.625673 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.625720 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.625733 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.625744 | orchestrator | 2025-07-06 20:12:00.625756 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-07-06 20:12:00.625766 | orchestrator | Sunday 06 July 2025 20:01:23 +0000 (0:00:01.020) 0:00:09.638 *********** 2025-07-06 20:12:00.625778 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-06 20:12:00.625789 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-06 20:12:00.625890 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-06 20:12:00.625903 | orchestrator | 2025-07-06 20:12:00.625914 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-07-06 20:12:00.625925 | orchestrator | Sunday 06 July 2025 20:01:24 +0000 (0:00:00.555) 0:00:10.194 *********** 2025-07-06 20:12:00.625935 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.625946 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.625957 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.625968 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.625978 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.625989 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.626000 | orchestrator | 2025-07-06 20:12:00.626130 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-07-06 20:12:00.626196 | orchestrator | Sunday 06 July 2025 20:01:24 +0000 (0:00:00.897) 0:00:11.091 *********** 2025-07-06 20:12:00.626209 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-06 20:12:00.626221 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-06 20:12:00.626232 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-06 20:12:00.626243 | orchestrator | 2025-07-06 20:12:00.626254 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-07-06 20:12:00.626265 | orchestrator | Sunday 06 July 2025 20:01:28 +0000 (0:00:03.138) 0:00:14.230 *********** 2025-07-06 20:12:00.626276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-07-06 20:12:00.626287 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-06 20:12:00.626298 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-07-06 20:12:00.626309 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.626319 | orchestrator | 2025-07-06 20:12:00.626330 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-07-06 20:12:00.626341 | orchestrator | Sunday 06 July 2025 20:01:28 +0000 (0:00:00.844) 0:00:15.074 *********** 2025-07-06 20:12:00.626355 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.626369 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.626401 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.626413 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.626424 | orchestrator | 2025-07-06 20:12:00.626435 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-07-06 20:12:00.626446 | orchestrator | Sunday 06 July 2025 20:01:29 +0000 (0:00:00.970) 0:00:16.045 *********** 2025-07-06 20:12:00.626466 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.626481 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.626493 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.626504 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.626516 | orchestrator | 2025-07-06 20:12:00.626526 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-07-06 20:12:00.626577 | orchestrator | Sunday 06 July 2025 20:01:30 +0000 (0:00:00.398) 0:00:16.443 *********** 2025-07-06 20:12:00.626739 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-06 20:01:25.752123', 'end': '2025-07-06 20:01:26.030660', 'delta': '0:00:00.278537', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.626787 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': 
'2025-07-06 20:01:26.738129', 'end': '2025-07-06 20:01:26.988240', 'delta': '0:00:00.250111', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.626800 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-06 20:01:27.514616', 'end': '2025-07-06 20:01:27.791451', 'delta': '0:00:00.276835', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.626812 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.626823 | orchestrator | 2025-07-06 20:12:00.626834 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-07-06 20:12:00.626846 | orchestrator | Sunday 06 July 2025 20:01:30 +0000 (0:00:00.219) 0:00:16.663 *********** 2025-07-06 20:12:00.626857 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.626868 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.626879 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.626890 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.626951 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.626963 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.626974 | orchestrator | 2025-07-06 20:12:00.627025 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-07-06 20:12:00.627074 | orchestrator | Sunday 06 July 2025 20:01:32 +0000 (0:00:01.648) 0:00:18.313 *********** 2025-07-06 20:12:00.627088 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:12:00.627099 | orchestrator | 2025-07-06 20:12:00.627110 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-07-06 20:12:00.627122 | orchestrator | Sunday 06 July 2025 20:01:32 +0000 (0:00:00.779) 0:00:19.093 *********** 2025-07-06 20:12:00.627132 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.627143 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.627154 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.627165 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.627176 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.627187 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.627198 | orchestrator | 2025-07-06 20:12:00.627209 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-07-06 20:12:00.627228 | orchestrator | Sunday 06 July 2025 20:01:33 +0000 (0:00:01.038) 0:00:20.131 *********** 2025-07-06 20:12:00.627239 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.627250 | 
orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.627261 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.627283 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.627294 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.627305 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.627316 | orchestrator | 2025-07-06 20:12:00.627510 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-07-06 20:12:00.627524 | orchestrator | Sunday 06 July 2025 20:01:35 +0000 (0:00:01.161) 0:00:21.293 *********** 2025-07-06 20:12:00.627535 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.627574 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.627649 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.627663 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.627674 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.627685 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.627696 | orchestrator | 2025-07-06 20:12:00.627707 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-07-06 20:12:00.627718 | orchestrator | Sunday 06 July 2025 20:01:35 +0000 (0:00:00.685) 0:00:21.979 *********** 2025-07-06 20:12:00.627729 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.627740 | orchestrator | 2025-07-06 20:12:00.627751 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-07-06 20:12:00.627762 | orchestrator | Sunday 06 July 2025 20:01:35 +0000 (0:00:00.119) 0:00:22.098 *********** 2025-07-06 20:12:00.627773 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.627784 | orchestrator | 2025-07-06 20:12:00.627795 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-07-06 20:12:00.627806 | orchestrator | Sunday 06 July 2025 20:01:36 +0000 (0:00:00.372) 0:00:22.470 *********** 2025-07-06 20:12:00.627816 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.627827 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.627874 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.627887 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.627899 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.627910 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.627921 | orchestrator | 2025-07-06 20:12:00.627951 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-07-06 20:12:00.627963 | orchestrator | Sunday 06 July 2025 20:01:37 +0000 (0:00:00.764) 0:00:23.235 *********** 2025-07-06 20:12:00.627974 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.627985 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.627996 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.628007 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.628018 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.628028 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.628039 | orchestrator | 2025-07-06 20:12:00.628050 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-07-06 20:12:00.628061 | orchestrator | Sunday 06 July 2025 20:01:38 +0000 (0:00:01.047) 0:00:24.282 *********** 2025-07-06 20:12:00.628072 | orchestrator | skipping: [testbed-node-3] 2025-07-06 
20:12:00.628083 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.628094 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.628104 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.628115 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.628126 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.628137 | orchestrator | 2025-07-06 20:12:00.628148 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-07-06 20:12:00.628159 | orchestrator | Sunday 06 July 2025 20:01:38 +0000 (0:00:00.861) 0:00:25.144 *********** 2025-07-06 20:12:00.628170 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.628190 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.628201 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.628212 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.628222 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.628233 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.628244 | orchestrator | 2025-07-06 20:12:00.628255 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-07-06 20:12:00.628266 | orchestrator | Sunday 06 July 2025 20:01:40 +0000 (0:00:01.078) 0:00:26.222 *********** 2025-07-06 20:12:00.628277 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.628288 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.628299 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.628309 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.628320 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.628331 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.628342 | orchestrator | 2025-07-06 20:12:00.628353 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-07-06 20:12:00.628364 | orchestrator | Sunday 06 July 2025 20:01:40 +0000 (0:00:00.592) 0:00:26.815 *********** 2025-07-06 20:12:00.628375 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.628437 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.628448 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.628459 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.628470 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.628481 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.628492 | orchestrator | 2025-07-06 20:12:00.628509 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-07-06 20:12:00.628521 | orchestrator | Sunday 06 July 2025 20:01:41 +0000 (0:00:00.792) 0:00:27.607 *********** 2025-07-06 20:12:00.628532 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.628543 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.628554 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.628564 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.628575 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.628586 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.628597 | orchestrator | 2025-07-06 20:12:00.628608 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-07-06 20:12:00.628619 | orchestrator | Sunday 06 July 2025 20:01:42 +0000 (0:00:00.723) 0:00:28.331 *********** 2025-07-06 20:12:00.628631 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5b3ebdad--89cb--5093--adb4--41e3a34848e3-osd--block--5b3ebdad--89cb--5093--adb4--41e3a34848e3', 'dm-uuid-LVM-d7HjWU3JzXeSeQbjfc2n9Yi9OGiYQHwxPT90GOkoOAxFv9UtUw1qQalfE6UDZoVk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--67620618--3322--5703--9264--076cb24f91fa-osd--block--67620618--3322--5703--9264--076cb24f91fa', 'dm-uuid-LVM-8M7FNHYgTDJ9A4eglNQUhos7W2WwexO36l6gjlXuHie3wt9U7ZPzJLjsWM5gLxG0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628726 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part1', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part14', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part15', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part16', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.628787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 
'value': {'holders': ['ceph--5b3ebdad--89cb--5093--adb4--41e3a34848e3-osd--block--5b3ebdad--89cb--5093--adb4--41e3a34848e3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xm31dv-gyCR-GRcW-qog6-fURB-MiId-z72Sxq', 'scsi-0QEMU_QEMU_HARDDISK_901e3f2c-f061-4105-8266-58d4d98b5960', 'scsi-SQEMU_QEMU_HARDDISK_901e3f2c-f061-4105-8266-58d4d98b5960'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.628804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--67620618--3322--5703--9264--076cb24f91fa-osd--block--67620618--3322--5703--9264--076cb24f91fa'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-61Xthh-rbJ0-B71E-GmRZ-6SSd-Wz1L-h1cJu7', 'scsi-0QEMU_QEMU_HARDDISK_46febb03-7465-44d2-9b41-dd661ec3aa7d', 'scsi-SQEMU_QEMU_HARDDISK_46febb03-7465-44d2-9b41-dd661ec3aa7d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.628815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6b2ac7c1--b26c--557b--8077--56c3cb59db23-osd--block--6b2ac7c1--b26c--557b--8077--56c3cb59db23', 'dm-uuid-LVM-QfX16kVcdVYnqzdCOCVjaqNxpgP4soHxJE8lczAYT7NweX7RTBI5cncey0TFLr60'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad2af1d2-0168-4556-9317-4e4f08581fa1', 'scsi-SQEMU_QEMU_HARDDISK_ad2af1d2-0168-4556-9317-4e4f08581fa1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.628855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.628866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e81f0ba1--e76a--5ac2--85fd--9d5b359e204d-osd--block--e81f0ba1--e76a--5ac2--85fd--9d5b359e204d', 'dm-uuid-LVM-CA1Wfim9SpDpxBtKo1BwTB5y8rmoIm3RXYW2SxLOg9CT7NfGhrhf8NOuQriXg0QO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}})  2025-07-06 20:12:00.628927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628954 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.628964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.628996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part1', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part14', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part15', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part16', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.629009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6b2ac7c1--b26c--557b--8077--56c3cb59db23-osd--block--6b2ac7c1--b26c--557b--8077--56c3cb59db23'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-53lk1J-mHQQ-paPR-nldo-PB2W-6kAU-0TGfMM', 'scsi-0QEMU_QEMU_HARDDISK_95e38168-1e77-4099-bfde-ad7249670c4c', 'scsi-SQEMU_QEMU_HARDDISK_95e38168-1e77-4099-bfde-ad7249670c4c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.629019 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e81f0ba1--e76a--5ac2--85fd--9d5b359e204d-osd--block--e81f0ba1--e76a--5ac2--85fd--9d5b359e204d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1eTAzL-LZpg-Kw21-QsDF-KF5N-hqpe-hB04d2', 'scsi-0QEMU_QEMU_HARDDISK_951512cc-5411-4e34-a1bc-779e76dbc3d2', 'scsi-SQEMU_QEMU_HARDDISK_951512cc-5411-4e34-a1bc-779e76dbc3d2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.629043 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6eb6290b-216e-4753-9f37-507fd8d1c155', 'scsi-SQEMU_QEMU_HARDDISK_6eb6290b-216e-4753-9f37-507fd8d1c155'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.629054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.629064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4472ae94--c442--5fee--95ac--d2e3b3e55ca4-osd--block--4472ae94--c442--5fee--95ac--d2e3b3e55ca4', 'dm-uuid-LVM-I5ATjPgkR63NkWUiDD1bjVOQFzhFfRUcotxcS8zflvAYkHLilg6Wke1DJ5epgIrF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c6cf71a--fa39--576b--8a24--237c163534df-osd--block--8c6cf71a--fa39--576b--8a24--237c163534df', 'dm-uuid-LVM-bdIz1aaEKdbNyRiBnwwOSbuQhj8IhhO6l6FvchNFMc6smPYfiWBRhLZKf4KLrJzH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-07-06 20:12:00.629100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629181 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.629197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part1', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part14', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part15', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part16', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.629223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4472ae94--c442--5fee--95ac--d2e3b3e55ca4-osd--block--4472ae94--c442--5fee--95ac--d2e3b3e55ca4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yx5XFo-M4DJ-bRrP-qvbI-GdzE-w8dn-bShrLr', 'scsi-0QEMU_QEMU_HARDDISK_d394e861-9c48-44bd-b1dc-9e2695f6f7e7', 'scsi-SQEMU_QEMU_HARDDISK_d394e861-9c48-44bd-b1dc-9e2695f6f7e7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.629234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8c6cf71a--fa39--576b--8a24--237c163534df-osd--block--8c6cf71a--fa39--576b--8a24--237c163534df'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kg4Teq-63G8-2Kkl-7gz1-J35t-HfCp-q0Kknc', 'scsi-0QEMU_QEMU_HARDDISK_ee53a9be-d7f6-4740-ab76-379edf2c3c5b', 'scsi-SQEMU_QEMU_HARDDISK_ee53a9be-d7f6-4740-ab76-379edf2c3c5b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.629245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_825fbe01-1f52-40fd-870f-6965feac768c', 'scsi-SQEMU_QEMU_HARDDISK_825fbe01-1f52-40fd-870f-6965feac768c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.629259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.629270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91', 'scsi-SQEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91-part1', 'scsi-SQEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91-part14', 'scsi-SQEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91-part15', 'scsi-SQEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91-part16', 'scsi-SQEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.629412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.629424 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.629434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-07-06 20:12:00.629454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01', 'scsi-SQEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01-part1', 'scsi-SQEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01-part14', 'scsi-SQEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01-part15', 'scsi-SQEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01-part16', 'scsi-SQEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.629550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.629569 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.629579 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.629589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:12:00.629681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273', 'scsi-SQEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273-part1', 'scsi-SQEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273-part14', 'scsi-SQEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273-part15', 'scsi-SQEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273-part16', 'scsi-SQEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.629704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:12:00.629715 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.629725 | orchestrator | 2025-07-06 20:12:00.629736 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-07-06 20:12:00.629752 | orchestrator | Sunday 06 July 2025 20:01:43 +0000 (0:00:01.071) 0:00:29.402 *********** 2025-07-06 20:12:00.629769 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5b3ebdad--89cb--5093--adb4--41e3a34848e3-osd--block--5b3ebdad--89cb--5093--adb4--41e3a34848e3', 'dm-uuid-LVM-d7HjWU3JzXeSeQbjfc2n9Yi9OGiYQHwxPT90GOkoOAxFv9UtUw1qQalfE6UDZoVk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.629789 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--67620618--3322--5703--9264--076cb24f91fa-osd--block--67620618--3322--5703--9264--076cb24f91fa', 'dm-uuid-LVM-8M7FNHYgTDJ9A4eglNQUhos7W2WwexO36l6gjlXuHie3wt9U7ZPzJLjsWM5gLxG0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.629818 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.629836 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.629852 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6b2ac7c1--b26c--557b--8077--56c3cb59db23-osd--block--6b2ac7c1--b26c--557b--8077--56c3cb59db23', 'dm-uuid-LVM-QfX16kVcdVYnqzdCOCVjaqNxpgP4soHxJE8lczAYT7NweX7RTBI5cncey0TFLr60'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.629878 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.629895 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e81f0ba1--e76a--5ac2--85fd--9d5b359e204d-osd--block--e81f0ba1--e76a--5ac2--85fd--9d5b359e204d', 'dm-uuid-LVM-CA1Wfim9SpDpxBtKo1BwTB5y8rmoIm3RXYW2SxLOg9CT7NfGhrhf8NOuQriXg0QO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.629913 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.629939 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.629950 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.629960 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.629977 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.629988 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.629998 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630047 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630061 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630071 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 
'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630081 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4472ae94--c442--5fee--95ac--d2e3b3e55ca4-osd--block--4472ae94--c442--5fee--95ac--d2e3b3e55ca4', 'dm-uuid-LVM-I5ATjPgkR63NkWUiDD1bjVOQFzhFfRUcotxcS8zflvAYkHLilg6Wke1DJ5epgIrF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630099 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c6cf71a--fa39--576b--8a24--237c163534df-osd--block--8c6cf71a--fa39--576b--8a24--237c163534df', 'dm-uuid-LVM-bdIz1aaEKdbNyRiBnwwOSbuQhj8IhhO6l6FvchNFMc6smPYfiWBRhLZKf4KLrJzH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630109 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630125 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630140 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630659 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part1', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part14', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part15', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part16', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630684 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630705 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630723 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5b3ebdad--89cb--5093--adb4--41e3a34848e3-osd--block--5b3ebdad--89cb--5093--adb4--41e3a34848e3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xm31dv-gyCR-GRcW-qog6-fURB-MiId-z72Sxq', 'scsi-0QEMU_QEMU_HARDDISK_901e3f2c-f061-4105-8266-58d4d98b5960', 'scsi-SQEMU_QEMU_HARDDISK_901e3f2c-f061-4105-8266-58d4d98b5960'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630734 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630752 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--67620618--3322--5703--9264--076cb24f91fa-osd--block--67620618--3322--5703--9264--076cb24f91fa'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-61Xthh-rbJ0-B71E-GmRZ-6SSd-Wz1L-h1cJu7', 'scsi-0QEMU_QEMU_HARDDISK_46febb03-7465-44d2-9b41-dd661ec3aa7d', 'scsi-SQEMU_QEMU_HARDDISK_46febb03-7465-44d2-9b41-dd661ec3aa7d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630763 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630780 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630795 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad2af1d2-0168-4556-9317-4e4f08581fa1', 'scsi-SQEMU_QEMU_HARDDISK_ad2af1d2-0168-4556-9317-4e4f08581fa1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630812 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part1', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part14', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part15', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part16', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630823 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630845 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630860 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6b2ac7c1--b26c--557b--8077--56c3cb59db23-osd--block--6b2ac7c1--b26c--557b--8077--56c3cb59db23'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-53lk1J-mHQQ-paPR-nldo-PB2W-6kAU-0TGfMM', 'scsi-0QEMU_QEMU_HARDDISK_95e38168-1e77-4099-bfde-ad7249670c4c', 'scsi-SQEMU_QEMU_HARDDISK_95e38168-1e77-4099-bfde-ad7249670c4c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630871 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630887 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e81f0ba1--e76a--5ac2--85fd--9d5b359e204d-osd--block--e81f0ba1--e76a--5ac2--85fd--9d5b359e204d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1eTAzL-LZpg-Kw21-QsDF-KF5N-hqpe-hB04d2', 'scsi-0QEMU_QEMU_HARDDISK_951512cc-5411-4e34-a1bc-779e76dbc3d2', 'scsi-SQEMU_QEMU_HARDDISK_951512cc-5411-4e34-a1bc-779e76dbc3d2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630898 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630915 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6eb6290b-216e-4753-9f37-507fd8d1c155', 'scsi-SQEMU_QEMU_HARDDISK_6eb6290b-216e-4753-9f37-507fd8d1c155'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630970 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part1', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part14', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part15', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part16', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.630983 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': 
'0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631000 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4472ae94--c442--5fee--95ac--d2e3b3e55ca4-osd--block--4472ae94--c442--5fee--95ac--d2e3b3e55ca4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yx5XFo-M4DJ-bRrP-qvbI-GdzE-w8dn-bShrLr', 'scsi-0QEMU_QEMU_HARDDISK_d394e861-9c48-44bd-b1dc-9e2695f6f7e7', 'scsi-SQEMU_QEMU_HARDDISK_d394e861-9c48-44bd-b1dc-9e2695f6f7e7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631015 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8c6cf71a--fa39--576b--8a24--237c163534df-osd--block--8c6cf71a--fa39--576b--8a24--237c163534df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kg4Teq-63G8-2Kkl-7gz1-J35t-HfCp-q0Kknc', 'scsi-0QEMU_QEMU_HARDDISK_ee53a9be-d7f6-4740-ab76-379edf2c3c5b', 'scsi-SQEMU_QEMU_HARDDISK_ee53a9be-d7f6-4740-ab76-379edf2c3c5b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631026 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_825fbe01-1f52-40fd-870f-6965feac768c', 'scsi-SQEMU_QEMU_HARDDISK_825fbe01-1f52-40fd-870f-6965feac768c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631042 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631053 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631068 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631083 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631094 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': 
'', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631104 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631115 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631132 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631148 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631164 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91', 'scsi-SQEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91-part1', 'scsi-SQEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91-part14', 'scsi-SQEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91-part15', 'scsi-SQEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91-part16', 'scsi-SQEMU_QEMU_HARDDISK_1eb046de-56ce-4fec-94aa-451822a3ca91-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631175 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.631218 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631237 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631251 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631263 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631279 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631292 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631303 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631321 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631338 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631357 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01', 'scsi-SQEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01-part1', 'scsi-SQEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01-part14', 'scsi-SQEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01-part15', 'scsi-SQEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01-part16', 'scsi-SQEMU_QEMU_HARDDISK_ea2d9aa9-10cd-4961-88d7-4a8638c93c01-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631370 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631453 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.631466 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.631478 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.631490 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.631506 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631517 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631526 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631540 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631549 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631558 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631575 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631585 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631598 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273', 'scsi-SQEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273-part1', 'scsi-SQEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273-part14', 'scsi-SQEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273-part15', 'scsi-SQEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273-part16', 'scsi-SQEMU_QEMU_HARDDISK_0815eb16-c1f1-4b6f-b81a-a7126aeb6273-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631607 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:12:00.631621 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.631630 | orchestrator | 2025-07-06 20:12:00.631639 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-07-06 20:12:00.631648 | orchestrator | Sunday 06 July 2025 20:01:45 +0000 (0:00:02.153) 0:00:31.556 *********** 2025-07-06 20:12:00.631660 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.631669 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.631677 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.631685 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.631693 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.631701 | orchestrator | ok: [testbed-node-2] 2025-07-06 
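Note on the long skipping dump above: it is produced by a ceph-ansible fact-gathering task that loops over every entry of ansible_facts.devices on each host. Every item is skipped because the task's guard evaluates to false: on the OSD hosts (testbed-node-3/4/5) the reported false_condition is 'osd_auto_discovery | default(False) | bool', and on the control hosts (testbed-node-0/1/2) it is 'inventory_hostname in groups.get(osd_group_name, [])'. A minimal sketch of such a guarded device loop follows; the two when expressions are taken from the log, while the task name and the fact being built are illustrative assumptions.

# Illustrative sketch only - not the actual ceph-ansible task.
# The two `when` expressions are copied from the false_condition fields above;
# the task name and _candidate_osd_devices are assumptions.
- name: Collect candidate OSD devices from gathered facts (sketch)
  ansible.builtin.set_fact:
    _candidate_osd_devices: "{{ _candidate_osd_devices | default([]) + [item.key] }}"
  loop: "{{ ansible_facts['devices'] | dict2items }}"
  loop_control:
    label: "{{ item.key }}"
  when:
    - inventory_hostname in groups.get(osd_group_name, [])
    - osd_auto_discovery | default(False) | bool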
20:12:00.631709 | orchestrator | 2025-07-06 20:12:00.631717 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-07-06 20:12:00.631725 | orchestrator | Sunday 06 July 2025 20:01:46 +0000 (0:00:01.229) 0:00:32.786 *********** 2025-07-06 20:12:00.631733 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.631741 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.631749 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.631757 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.631765 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.631773 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.631781 | orchestrator | 2025-07-06 20:12:00.631789 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-07-06 20:12:00.631797 | orchestrator | Sunday 06 July 2025 20:01:47 +0000 (0:00:00.564) 0:00:33.350 *********** 2025-07-06 20:12:00.631805 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.631813 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.631821 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.631828 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.631836 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.631844 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.631852 | orchestrator | 2025-07-06 20:12:00.631860 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-07-06 20:12:00.631868 | orchestrator | Sunday 06 July 2025 20:01:48 +0000 (0:00:01.298) 0:00:34.648 *********** 2025-07-06 20:12:00.631876 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.631884 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.631892 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.631900 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.631908 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.631916 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.631924 | orchestrator | 2025-07-06 20:12:00.631932 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-07-06 20:12:00.631940 | orchestrator | Sunday 06 July 2025 20:01:49 +0000 (0:00:01.102) 0:00:35.751 *********** 2025-07-06 20:12:00.631948 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.631956 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.631964 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.631972 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.631980 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.631994 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.632007 | orchestrator | 2025-07-06 20:12:00.632022 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-07-06 20:12:00.632041 | orchestrator | Sunday 06 July 2025 20:01:50 +0000 (0:00:00.817) 0:00:36.569 *********** 2025-07-06 20:12:00.632056 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.632077 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.632095 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.632108 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.632122 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.632136 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.632150 | orchestrator 
| 2025-07-06 20:12:00.632164 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-07-06 20:12:00.632179 | orchestrator | Sunday 06 July 2025 20:01:51 +0000 (0:00:00.662) 0:00:37.232 *********** 2025-07-06 20:12:00.632195 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-07-06 20:12:00.632206 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-07-06 20:12:00.632214 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-07-06 20:12:00.632222 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-07-06 20:12:00.632230 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-07-06 20:12:00.632237 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-07-06 20:12:00.632245 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-07-06 20:12:00.632253 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-06 20:12:00.632261 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-07-06 20:12:00.632269 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-07-06 20:12:00.632277 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-07-06 20:12:00.632285 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-07-06 20:12:00.632292 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-07-06 20:12:00.632300 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-07-06 20:12:00.632308 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-07-06 20:12:00.632316 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-07-06 20:12:00.632324 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-07-06 20:12:00.632332 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-07-06 20:12:00.632339 | orchestrator | 2025-07-06 20:12:00.632347 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-07-06 20:12:00.632355 | orchestrator | Sunday 06 July 2025 20:01:52 +0000 (0:00:01.946) 0:00:39.178 *********** 2025-07-06 20:12:00.632363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-07-06 20:12:00.632371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-06 20:12:00.632397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-07-06 20:12:00.632405 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.632413 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-07-06 20:12:00.632421 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-07-06 20:12:00.632429 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-07-06 20:12:00.632437 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-07-06 20:12:00.632445 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-07-06 20:12:00.632453 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-07-06 20:12:00.632467 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.632476 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-06 20:12:00.632483 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-06 20:12:00.632491 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-06 20:12:00.632499 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.632507 
| orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-07-06 20:12:00.632515 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-07-06 20:12:00.632523 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-07-06 20:12:00.632531 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.632539 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.632555 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-07-06 20:12:00.632563 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-07-06 20:12:00.632571 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-07-06 20:12:00.632579 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.632587 | orchestrator | 2025-07-06 20:12:00.632595 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-07-06 20:12:00.632603 | orchestrator | Sunday 06 July 2025 20:01:53 +0000 (0:00:00.941) 0:00:40.120 *********** 2025-07-06 20:12:00.632611 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.632619 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.632627 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.632636 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.632644 | orchestrator | 2025-07-06 20:12:00.632652 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-07-06 20:12:00.632660 | orchestrator | Sunday 06 July 2025 20:01:55 +0000 (0:00:01.236) 0:00:41.356 *********** 2025-07-06 20:12:00.632668 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.632676 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.632684 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.632691 | orchestrator | 2025-07-06 20:12:00.632699 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-07-06 20:12:00.632707 | orchestrator | Sunday 06 July 2025 20:01:55 +0000 (0:00:00.548) 0:00:41.905 *********** 2025-07-06 20:12:00.632715 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.632723 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.632731 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.632739 | orchestrator | 2025-07-06 20:12:00.632747 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-07-06 20:12:00.632755 | orchestrator | Sunday 06 July 2025 20:01:56 +0000 (0:00:00.970) 0:00:42.875 *********** 2025-07-06 20:12:00.632763 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.632775 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.632783 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.632791 | orchestrator | 2025-07-06 20:12:00.632799 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-07-06 20:12:00.632807 | orchestrator | Sunday 06 July 2025 20:01:57 +0000 (0:00:00.482) 0:00:43.358 *********** 2025-07-06 20:12:00.632814 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.632822 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.632830 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.632838 | orchestrator | 2025-07-06 20:12:00.632846 | orchestrator | TASK 
[ceph-facts : Set_fact _interface] **************************************** 2025-07-06 20:12:00.632854 | orchestrator | Sunday 06 July 2025 20:01:57 +0000 (0:00:00.712) 0:00:44.070 *********** 2025-07-06 20:12:00.632862 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:12:00.632870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:12:00.632878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:12:00.632886 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.632894 | orchestrator | 2025-07-06 20:12:00.632902 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-07-06 20:12:00.632910 | orchestrator | Sunday 06 July 2025 20:01:58 +0000 (0:00:00.364) 0:00:44.435 *********** 2025-07-06 20:12:00.632918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:12:00.632926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:12:00.632933 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:12:00.632941 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.632949 | orchestrator | 2025-07-06 20:12:00.632957 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-07-06 20:12:00.632970 | orchestrator | Sunday 06 July 2025 20:01:58 +0000 (0:00:00.321) 0:00:44.756 *********** 2025-07-06 20:12:00.632978 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:12:00.632986 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:12:00.632994 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:12:00.633002 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.633010 | orchestrator | 2025-07-06 20:12:00.633018 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-07-06 20:12:00.633026 | orchestrator | Sunday 06 July 2025 20:01:59 +0000 (0:00:00.446) 0:00:45.203 *********** 2025-07-06 20:12:00.633033 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.633041 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.633049 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.633057 | orchestrator | 2025-07-06 20:12:00.633065 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-07-06 20:12:00.633073 | orchestrator | Sunday 06 July 2025 20:01:59 +0000 (0:00:00.339) 0:00:45.543 *********** 2025-07-06 20:12:00.633081 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-07-06 20:12:00.633089 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-07-06 20:12:00.633096 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-07-06 20:12:00.633104 | orchestrator | 2025-07-06 20:12:00.633117 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-07-06 20:12:00.633125 | orchestrator | Sunday 06 July 2025 20:02:00 +0000 (0:00:00.857) 0:00:46.400 *********** 2025-07-06 20:12:00.633133 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-06 20:12:00.633141 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-06 20:12:00.633149 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-06 20:12:00.633157 | orchestrator | ok: 
[testbed-node-3] => (item=testbed-node-3) 2025-07-06 20:12:00.633165 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-06 20:12:00.633173 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-06 20:12:00.633181 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-06 20:12:00.633189 | orchestrator | 2025-07-06 20:12:00.633196 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-07-06 20:12:00.633204 | orchestrator | Sunday 06 July 2025 20:02:01 +0000 (0:00:00.959) 0:00:47.360 *********** 2025-07-06 20:12:00.633212 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-06 20:12:00.633220 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-06 20:12:00.633228 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-06 20:12:00.633236 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-07-06 20:12:00.633244 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-06 20:12:00.633252 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-06 20:12:00.633260 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-06 20:12:00.633268 | orchestrator | 2025-07-06 20:12:00.633276 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-06 20:12:00.633284 | orchestrator | Sunday 06 July 2025 20:02:03 +0000 (0:00:02.137) 0:00:49.498 *********** 2025-07-06 20:12:00.633292 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.633300 | orchestrator | 2025-07-06 20:12:00.633308 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-06 20:12:00.633321 | orchestrator | Sunday 06 July 2025 20:02:04 +0000 (0:00:01.346) 0:00:50.844 *********** 2025-07-06 20:12:00.633333 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.633342 | orchestrator | 2025-07-06 20:12:00.633349 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-06 20:12:00.633357 | orchestrator | Sunday 06 July 2025 20:02:05 +0000 (0:00:01.039) 0:00:51.883 *********** 2025-07-06 20:12:00.633365 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.633373 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.633396 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.633404 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.633412 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.633420 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.633428 | orchestrator | 2025-07-06 20:12:00.633436 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-06 20:12:00.633444 | orchestrator | Sunday 06 July 2025 20:02:06 +0000 (0:00:01.028) 0:00:52.911 *********** 2025-07-06 20:12:00.633452 | orchestrator | 
skipping: [testbed-node-0] 2025-07-06 20:12:00.633460 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.633468 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.633476 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.633484 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.633492 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.633500 | orchestrator | 2025-07-06 20:12:00.633508 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-06 20:12:00.633516 | orchestrator | Sunday 06 July 2025 20:02:07 +0000 (0:00:00.913) 0:00:53.825 *********** 2025-07-06 20:12:00.633524 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.633532 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.633540 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.633547 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.633555 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.633563 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.633571 | orchestrator | 2025-07-06 20:12:00.633579 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-06 20:12:00.633587 | orchestrator | Sunday 06 July 2025 20:02:08 +0000 (0:00:00.919) 0:00:54.744 *********** 2025-07-06 20:12:00.633595 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.633603 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.633611 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.633619 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.633626 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.633634 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.633642 | orchestrator | 2025-07-06 20:12:00.633650 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-06 20:12:00.633658 | orchestrator | Sunday 06 July 2025 20:02:09 +0000 (0:00:00.751) 0:00:55.496 *********** 2025-07-06 20:12:00.633666 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.633674 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.633682 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.633690 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.633697 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.633705 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.633713 | orchestrator | 2025-07-06 20:12:00.633721 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-06 20:12:00.633734 | orchestrator | Sunday 06 July 2025 20:02:10 +0000 (0:00:01.086) 0:00:56.582 *********** 2025-07-06 20:12:00.633742 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.633750 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.633758 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.633766 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.633774 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.633817 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.633825 | orchestrator | 2025-07-06 20:12:00.633833 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-06 20:12:00.633841 | orchestrator | Sunday 06 July 2025 20:02:11 +0000 (0:00:00.712) 0:00:57.294 *********** 2025-07-06 20:12:00.633849 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.633857 | 
orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.633865 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.633873 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.633881 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.633889 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.633897 | orchestrator | 2025-07-06 20:12:00.633905 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-06 20:12:00.633913 | orchestrator | Sunday 06 July 2025 20:02:11 +0000 (0:00:00.698) 0:00:57.993 *********** 2025-07-06 20:12:00.633921 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.633929 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.633937 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.633945 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.633952 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.633960 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.633968 | orchestrator | 2025-07-06 20:12:00.633976 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-06 20:12:00.633984 | orchestrator | Sunday 06 July 2025 20:02:12 +0000 (0:00:00.962) 0:00:58.955 *********** 2025-07-06 20:12:00.633992 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.634000 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.634008 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.634180 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.634194 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.634202 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.634210 | orchestrator | 2025-07-06 20:12:00.634219 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-06 20:12:00.634227 | orchestrator | Sunday 06 July 2025 20:02:14 +0000 (0:00:01.535) 0:01:00.491 *********** 2025-07-06 20:12:00.634235 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.634243 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.634251 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.634259 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.634267 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.634275 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.634283 | orchestrator | 2025-07-06 20:12:00.634291 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-06 20:12:00.634299 | orchestrator | Sunday 06 July 2025 20:02:14 +0000 (0:00:00.610) 0:01:01.101 *********** 2025-07-06 20:12:00.634312 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.634320 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.634328 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.634336 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.634344 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.634352 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.634360 | orchestrator | 2025-07-06 20:12:00.634368 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-06 20:12:00.634423 | orchestrator | Sunday 06 July 2025 20:02:16 +0000 (0:00:01.233) 0:01:02.334 *********** 2025-07-06 20:12:00.634433 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.634441 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.634449 | orchestrator | ok: 
[testbed-node-5] 2025-07-06 20:12:00.634457 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.634465 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.634473 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.634481 | orchestrator | 2025-07-06 20:12:00.634489 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-06 20:12:00.634497 | orchestrator | Sunday 06 July 2025 20:02:16 +0000 (0:00:00.556) 0:01:02.891 *********** 2025-07-06 20:12:00.634514 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.634522 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.634530 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.634537 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.634545 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.634553 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.634561 | orchestrator | 2025-07-06 20:12:00.634569 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-06 20:12:00.634577 | orchestrator | Sunday 06 July 2025 20:02:17 +0000 (0:00:00.844) 0:01:03.736 *********** 2025-07-06 20:12:00.634585 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.634593 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.634601 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.634609 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.634617 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.634625 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.634632 | orchestrator | 2025-07-06 20:12:00.634640 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-06 20:12:00.634648 | orchestrator | Sunday 06 July 2025 20:02:18 +0000 (0:00:00.670) 0:01:04.406 *********** 2025-07-06 20:12:00.634656 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.634664 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.634672 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.634679 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.634687 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.634695 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.634703 | orchestrator | 2025-07-06 20:12:00.634711 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-06 20:12:00.634719 | orchestrator | Sunday 06 July 2025 20:02:19 +0000 (0:00:00.877) 0:01:05.284 *********** 2025-07-06 20:12:00.634726 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.634734 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.634742 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.634750 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.634757 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.634764 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.634771 | orchestrator | 2025-07-06 20:12:00.634805 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-06 20:12:00.634815 | orchestrator | Sunday 06 July 2025 20:02:20 +0000 (0:00:00.937) 0:01:06.222 *********** 2025-07-06 20:12:00.634823 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.634831 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.634838 | orchestrator | skipping: [testbed-node-5] 2025-07-06 
20:12:00.634846 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.634853 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.634861 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.634869 | orchestrator | 2025-07-06 20:12:00.634877 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-06 20:12:00.634884 | orchestrator | Sunday 06 July 2025 20:02:20 +0000 (0:00:00.804) 0:01:07.026 *********** 2025-07-06 20:12:00.634892 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.634900 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.634907 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.634915 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.634923 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.634930 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.634938 | orchestrator | 2025-07-06 20:12:00.634946 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-06 20:12:00.634954 | orchestrator | Sunday 06 July 2025 20:02:21 +0000 (0:00:00.629) 0:01:07.655 *********** 2025-07-06 20:12:00.634961 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.634969 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.634976 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.634984 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.634998 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.635006 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.635013 | orchestrator | 2025-07-06 20:12:00.635021 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-07-06 20:12:00.635028 | orchestrator | Sunday 06 July 2025 20:02:22 +0000 (0:00:01.199) 0:01:08.855 *********** 2025-07-06 20:12:00.635036 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.635044 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.635052 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.635059 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.635067 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.635074 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.635082 | orchestrator | 2025-07-06 20:12:00.635089 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-07-06 20:12:00.635100 | orchestrator | Sunday 06 July 2025 20:02:24 +0000 (0:00:01.539) 0:01:10.395 *********** 2025-07-06 20:12:00.635112 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.635124 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.635131 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.635137 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.635144 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.635151 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.635157 | orchestrator | 2025-07-06 20:12:00.635164 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-07-06 20:12:00.635178 | orchestrator | Sunday 06 July 2025 20:02:26 +0000 (0:00:01.850) 0:01:12.245 *********** 2025-07-06 20:12:00.635185 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.635193 | orchestrator | 2025-07-06 20:12:00.635200 | orchestrator | TASK 
[ceph-container-common : Stop lvmetad] ************************************ 2025-07-06 20:12:00.635207 | orchestrator | Sunday 06 July 2025 20:02:27 +0000 (0:00:01.183) 0:01:13.429 *********** 2025-07-06 20:12:00.635213 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.635220 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.635226 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.635233 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.635240 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.635246 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.635253 | orchestrator | 2025-07-06 20:12:00.635260 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-07-06 20:12:00.635266 | orchestrator | Sunday 06 July 2025 20:02:27 +0000 (0:00:00.749) 0:01:14.178 *********** 2025-07-06 20:12:00.635273 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.635279 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.635286 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.635293 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.635299 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.635306 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.635313 | orchestrator | 2025-07-06 20:12:00.635319 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-07-06 20:12:00.635326 | orchestrator | Sunday 06 July 2025 20:02:28 +0000 (0:00:00.566) 0:01:14.744 *********** 2025-07-06 20:12:00.635333 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-06 20:12:00.635339 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-06 20:12:00.635346 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-06 20:12:00.635352 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-06 20:12:00.635359 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-06 20:12:00.635366 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-06 20:12:00.635393 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-06 20:12:00.635400 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-06 20:12:00.635407 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-06 20:12:00.635414 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-06 20:12:00.635420 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-06 20:12:00.635448 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-06 20:12:00.635456 | orchestrator | 2025-07-06 20:12:00.635463 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-07-06 20:12:00.635470 | orchestrator | Sunday 06 July 2025 20:02:29 +0000 (0:00:01.421) 0:01:16.166 *********** 2025-07-06 20:12:00.635476 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.635483 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.635490 | orchestrator | changed: 
[testbed-node-5] 2025-07-06 20:12:00.635496 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.635503 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.635510 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.635517 | orchestrator | 2025-07-06 20:12:00.635523 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-07-06 20:12:00.635530 | orchestrator | Sunday 06 July 2025 20:02:30 +0000 (0:00:00.861) 0:01:17.027 *********** 2025-07-06 20:12:00.635537 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.635544 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.635550 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.635557 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.635564 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.635570 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.635577 | orchestrator | 2025-07-06 20:12:00.635584 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-07-06 20:12:00.635591 | orchestrator | Sunday 06 July 2025 20:02:31 +0000 (0:00:00.628) 0:01:17.656 *********** 2025-07-06 20:12:00.635597 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.635604 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.635611 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.635617 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.635624 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.635631 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.635637 | orchestrator | 2025-07-06 20:12:00.635644 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-07-06 20:12:00.635651 | orchestrator | Sunday 06 July 2025 20:02:31 +0000 (0:00:00.461) 0:01:18.117 *********** 2025-07-06 20:12:00.635658 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.635664 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.635671 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.635677 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.635684 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.635691 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.635697 | orchestrator | 2025-07-06 20:12:00.635704 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-07-06 20:12:00.635711 | orchestrator | Sunday 06 July 2025 20:02:32 +0000 (0:00:00.628) 0:01:18.746 *********** 2025-07-06 20:12:00.635722 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.635729 | orchestrator | 2025-07-06 20:12:00.635736 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-07-06 20:12:00.635742 | orchestrator | Sunday 06 July 2025 20:02:33 +0000 (0:00:00.959) 0:01:19.705 *********** 2025-07-06 20:12:00.635749 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.635763 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.635769 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.635776 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.635783 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.635790 | orchestrator | ok: [testbed-node-1] 2025-07-06 
20:12:00.635796 | orchestrator | 2025-07-06 20:12:00.635803 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-07-06 20:12:00.635810 | orchestrator | Sunday 06 July 2025 20:03:48 +0000 (0:01:14.594) 0:02:34.299 *********** 2025-07-06 20:12:00.635817 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-06 20:12:00.635824 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-06 20:12:00.635831 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-06 20:12:00.635837 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.635844 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-06 20:12:00.635851 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-06 20:12:00.635858 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-06 20:12:00.635864 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.635871 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-06 20:12:00.635878 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-06 20:12:00.635884 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-06 20:12:00.635891 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.635898 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-06 20:12:00.635905 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-06 20:12:00.635911 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-06 20:12:00.635918 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.635925 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-06 20:12:00.635931 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-06 20:12:00.635938 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-06 20:12:00.635945 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.635952 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-06 20:12:00.635976 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-06 20:12:00.635984 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-06 20:12:00.635991 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.635997 | orchestrator | 2025-07-06 20:12:00.636004 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-07-06 20:12:00.636011 | orchestrator | Sunday 06 July 2025 20:03:48 +0000 (0:00:00.844) 0:02:35.144 *********** 2025-07-06 20:12:00.636017 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.636024 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.636031 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.636037 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.636044 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.636051 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.636058 | orchestrator | 
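The roughly 75-second "Pulling Ceph container image" step above is where ceph-ansible fetches the Ceph container image on all six nodes before any daemons are started, and the later "Get ceph version" task reads the packaged Ceph version back out of that image. A minimal manual equivalent, assuming Docker as the container engine and using a placeholder image reference (the real one is assembled from the ceph_docker_registry, ceph_docker_image and ceph_docker_image_tag variables of this deployment), is:

  # placeholder image reference; substitute the registry/image/tag configured for this testbed
  IMAGE=quay.io/ceph/daemon:latest-reef
  # pre-pull on every node so later container starts do not block on the registry
  docker pull "$IMAGE"
  # report the Ceph version shipped in the image, mirroring the "Get ceph version" task below
  docker run --rm --entrypoint ceph "$IMAGE" --version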
2025-07-06 20:12:00.636064 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-07-06 20:12:00.636071 | orchestrator | Sunday 06 July 2025 20:03:49 +0000 (0:00:00.572) 0:02:35.716 *********** 2025-07-06 20:12:00.636078 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.636084 | orchestrator | 2025-07-06 20:12:00.636091 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-07-06 20:12:00.636103 | orchestrator | Sunday 06 July 2025 20:03:49 +0000 (0:00:00.143) 0:02:35.859 *********** 2025-07-06 20:12:00.636110 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.636116 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.636123 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.636130 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.636136 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.636143 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.636150 | orchestrator | 2025-07-06 20:12:00.636156 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-07-06 20:12:00.636163 | orchestrator | Sunday 06 July 2025 20:03:50 +0000 (0:00:00.957) 0:02:36.817 *********** 2025-07-06 20:12:00.636170 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.636177 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.636183 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.636190 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.636197 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.636203 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.636210 | orchestrator | 2025-07-06 20:12:00.636216 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-07-06 20:12:00.636223 | orchestrator | Sunday 06 July 2025 20:03:51 +0000 (0:00:00.830) 0:02:37.647 *********** 2025-07-06 20:12:00.636230 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.636237 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.636243 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.636250 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.636256 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.636263 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.636270 | orchestrator | 2025-07-06 20:12:00.636276 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-07-06 20:12:00.636287 | orchestrator | Sunday 06 July 2025 20:03:52 +0000 (0:00:00.943) 0:02:38.591 *********** 2025-07-06 20:12:00.636294 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.636301 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.636307 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.636314 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.636321 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.636327 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.636334 | orchestrator | 2025-07-06 20:12:00.636341 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-07-06 20:12:00.636348 | orchestrator | Sunday 06 July 2025 20:03:54 +0000 (0:00:02.535) 0:02:41.127 *********** 2025-07-06 20:12:00.636354 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.636361 | orchestrator | ok: [testbed-node-4] 2025-07-06 
20:12:00.636368 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.636374 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.636395 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.636402 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.636409 | orchestrator | 2025-07-06 20:12:00.636415 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-07-06 20:12:00.636422 | orchestrator | Sunday 06 July 2025 20:03:55 +0000 (0:00:00.737) 0:02:41.865 *********** 2025-07-06 20:12:00.636429 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.636437 | orchestrator | 2025-07-06 20:12:00.636444 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-07-06 20:12:00.636451 | orchestrator | Sunday 06 July 2025 20:03:56 +0000 (0:00:01.145) 0:02:43.010 *********** 2025-07-06 20:12:00.636458 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.636464 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.636471 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.636477 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.636484 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.636499 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.636506 | orchestrator | 2025-07-06 20:12:00.636513 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-07-06 20:12:00.636519 | orchestrator | Sunday 06 July 2025 20:03:57 +0000 (0:00:00.695) 0:02:43.706 *********** 2025-07-06 20:12:00.636526 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.636533 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.636539 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.636546 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.636553 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.636559 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.636566 | orchestrator | 2025-07-06 20:12:00.636572 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-07-06 20:12:00.636579 | orchestrator | Sunday 06 July 2025 20:03:58 +0000 (0:00:00.729) 0:02:44.435 *********** 2025-07-06 20:12:00.636586 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.636592 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.636599 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.636606 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.636613 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.636638 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.636646 | orchestrator | 2025-07-06 20:12:00.636653 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-07-06 20:12:00.636660 | orchestrator | Sunday 06 July 2025 20:03:58 +0000 (0:00:00.603) 0:02:45.039 *********** 2025-07-06 20:12:00.636666 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.636673 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.636680 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.636686 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.636693 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.636700 | 
orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.636707 | orchestrator | 2025-07-06 20:12:00.636713 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-07-06 20:12:00.636720 | orchestrator | Sunday 06 July 2025 20:03:59 +0000 (0:00:00.799) 0:02:45.838 *********** 2025-07-06 20:12:00.636727 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.636734 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.636740 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.636747 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.636754 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.636760 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.636767 | orchestrator | 2025-07-06 20:12:00.636774 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-07-06 20:12:00.636780 | orchestrator | Sunday 06 July 2025 20:04:00 +0000 (0:00:00.679) 0:02:46.518 *********** 2025-07-06 20:12:00.636787 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.636794 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.636800 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.636807 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.636814 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.636820 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.636827 | orchestrator | 2025-07-06 20:12:00.636834 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-07-06 20:12:00.636841 | orchestrator | Sunday 06 July 2025 20:04:01 +0000 (0:00:00.887) 0:02:47.405 *********** 2025-07-06 20:12:00.636847 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.636854 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.636861 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.636868 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.636875 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.636882 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.636888 | orchestrator | 2025-07-06 20:12:00.636895 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-07-06 20:12:00.636907 | orchestrator | Sunday 06 July 2025 20:04:01 +0000 (0:00:00.571) 0:02:47.977 *********** 2025-07-06 20:12:00.636913 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.636920 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.636927 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.636933 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.636940 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.636947 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.636954 | orchestrator | 2025-07-06 20:12:00.636964 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-07-06 20:12:00.636971 | orchestrator | Sunday 06 July 2025 20:04:02 +0000 (0:00:00.787) 0:02:48.765 *********** 2025-07-06 20:12:00.636977 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.636984 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.636991 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.636998 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.637004 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.637011 | orchestrator | ok: 
[testbed-node-2] 2025-07-06 20:12:00.637018 | orchestrator | 2025-07-06 20:12:00.637025 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-07-06 20:12:00.637031 | orchestrator | Sunday 06 July 2025 20:04:03 +0000 (0:00:01.196) 0:02:49.962 *********** 2025-07-06 20:12:00.637038 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.637045 | orchestrator | 2025-07-06 20:12:00.637052 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-07-06 20:12:00.637059 | orchestrator | Sunday 06 July 2025 20:04:04 +0000 (0:00:01.127) 0:02:51.089 *********** 2025-07-06 20:12:00.637066 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-07-06 20:12:00.637072 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-07-06 20:12:00.637079 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-07-06 20:12:00.637086 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-07-06 20:12:00.637093 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-07-06 20:12:00.637099 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-07-06 20:12:00.637106 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-07-06 20:12:00.637113 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-07-06 20:12:00.637120 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-07-06 20:12:00.637126 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-07-06 20:12:00.637133 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-07-06 20:12:00.637140 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-07-06 20:12:00.637146 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-07-06 20:12:00.637153 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-07-06 20:12:00.637160 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-07-06 20:12:00.637167 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-07-06 20:12:00.637173 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-07-06 20:12:00.637180 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-07-06 20:12:00.637187 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-07-06 20:12:00.637194 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-07-06 20:12:00.637217 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-07-06 20:12:00.637225 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-07-06 20:12:00.637232 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-07-06 20:12:00.637239 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-07-06 20:12:00.637246 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-07-06 20:12:00.637257 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-07-06 20:12:00.637264 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-07-06 20:12:00.637271 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-07-06 20:12:00.637278 | orchestrator | changed: [testbed-node-4] => 
(item=/var/lib/ceph/tmp) 2025-07-06 20:12:00.637285 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-07-06 20:12:00.637291 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-07-06 20:12:00.637298 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-07-06 20:12:00.637305 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-07-06 20:12:00.637311 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-07-06 20:12:00.637318 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-07-06 20:12:00.637325 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-07-06 20:12:00.637332 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-07-06 20:12:00.637338 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-07-06 20:12:00.637345 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-07-06 20:12:00.637352 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-07-06 20:12:00.637371 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-07-06 20:12:00.637412 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-07-06 20:12:00.637419 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-07-06 20:12:00.637426 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-07-06 20:12:00.637433 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-07-06 20:12:00.637440 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-06 20:12:00.637446 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-07-06 20:12:00.637453 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-06 20:12:00.637460 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-06 20:12:00.637470 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-07-06 20:12:00.637477 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-07-06 20:12:00.637484 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-06 20:12:00.637491 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-06 20:12:00.637497 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-06 20:12:00.637504 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-06 20:12:00.637511 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-06 20:12:00.637518 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-06 20:12:00.637525 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-06 20:12:00.637531 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-06 20:12:00.637539 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-06 20:12:00.637546 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-06 20:12:00.637552 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-06 20:12:00.637559 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-06 
20:12:00.637566 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-06 20:12:00.637572 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-06 20:12:00.637579 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-06 20:12:00.637590 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-06 20:12:00.637597 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-06 20:12:00.637604 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-06 20:12:00.637611 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-06 20:12:00.637618 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-06 20:12:00.637625 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-06 20:12:00.637631 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-06 20:12:00.637637 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-06 20:12:00.637643 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-06 20:12:00.637650 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-06 20:12:00.637656 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-06 20:12:00.637662 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-06 20:12:00.637687 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-06 20:12:00.637695 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-07-06 20:12:00.637701 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-06 20:12:00.637707 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-06 20:12:00.637714 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-07-06 20:12:00.637720 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-07-06 20:12:00.637726 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-07-06 20:12:00.637732 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-06 20:12:00.637739 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-06 20:12:00.637745 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-06 20:12:00.637751 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-07-06 20:12:00.637758 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-07-06 20:12:00.637764 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-07-06 20:12:00.637770 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-07-06 20:12:00.637776 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-07-06 20:12:00.637782 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-07-06 20:12:00.637789 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-07-06 20:12:00.637795 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-07-06 20:12:00.637801 | orchestrator | 2025-07-06 20:12:00.637807 | orchestrator | TASK [ceph-config : Include_tasks 
rgw_systemd_environment_file.yml] ************ 2025-07-06 20:12:00.637814 | orchestrator | Sunday 06 July 2025 20:04:11 +0000 (0:00:06.232) 0:02:57.322 *********** 2025-07-06 20:12:00.637820 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.637826 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.637832 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.637839 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.637845 | orchestrator | 2025-07-06 20:12:00.637851 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-07-06 20:12:00.637858 | orchestrator | Sunday 06 July 2025 20:04:12 +0000 (0:00:00.919) 0:02:58.242 *********** 2025-07-06 20:12:00.637864 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-06 20:12:00.637874 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-06 20:12:00.637886 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-06 20:12:00.637893 | orchestrator | 2025-07-06 20:12:00.637899 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-07-06 20:12:00.637905 | orchestrator | Sunday 06 July 2025 20:04:12 +0000 (0:00:00.675) 0:02:58.917 *********** 2025-07-06 20:12:00.637912 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-06 20:12:00.637918 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-06 20:12:00.637924 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-06 20:12:00.637931 | orchestrator | 2025-07-06 20:12:00.637937 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-07-06 20:12:00.637943 | orchestrator | Sunday 06 July 2025 20:04:14 +0000 (0:00:01.438) 0:03:00.355 *********** 2025-07-06 20:12:00.637949 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.637956 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.637962 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.637968 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.637974 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.637981 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.637987 | orchestrator | 2025-07-06 20:12:00.637993 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-07-06 20:12:00.637999 | orchestrator | Sunday 06 July 2025 20:04:14 +0000 (0:00:00.493) 0:03:00.849 *********** 2025-07-06 20:12:00.638006 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.638012 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.638041 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.638047 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.638053 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.638059 | orchestrator | skipping: [testbed-node-2] 
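The "Create rados gateway instance directories" and "Generate environment file" tasks above prepare one working directory and one environment file per RadosGW instance on testbed-node-3/4/5; the containerized ceph-radosgw systemd unit later sources that file to know which instance it runs. A rough shell sketch for testbed-node-3, using the instance data shown in the log (instance rgw0, address 192.168.16.13, port 8081) and assuming the default "ceph" cluster name (the exact path and file contents are rendered by ceph-ansible's templates and may differ):

  # assumed layout for the rgw0 instance on testbed-node-3
  mkdir -p /var/lib/ceph/radosgw/ceph-rgw.testbed-node-3.rgw0
  # the environment file tells the per-instance radosgw unit which instance name to start;
  # the rendered address/port land in the rgw_frontends setting (see the
  # "Set config to cluster" items further down)
  printf 'INST_NAME=rgw0\n' > /var/lib/ceph/radosgw/ceph-rgw.testbed-node-3.rgw0/EnvironmentFile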
2025-07-06 20:12:00.638066 | orchestrator | 2025-07-06 20:12:00.638072 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-07-06 20:12:00.638078 | orchestrator | Sunday 06 July 2025 20:04:15 +0000 (0:00:00.590) 0:03:01.440 *********** 2025-07-06 20:12:00.638084 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.638091 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.638097 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.638103 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.638109 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.638115 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.638122 | orchestrator | 2025-07-06 20:12:00.638128 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-07-06 20:12:00.638134 | orchestrator | Sunday 06 July 2025 20:04:15 +0000 (0:00:00.624) 0:03:02.064 *********** 2025-07-06 20:12:00.638158 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.638165 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.638172 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.638178 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.638184 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.638190 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.638197 | orchestrator | 2025-07-06 20:12:00.638203 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-07-06 20:12:00.638209 | orchestrator | Sunday 06 July 2025 20:04:16 +0000 (0:00:00.680) 0:03:02.744 *********** 2025-07-06 20:12:00.638216 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.638222 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.638228 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.638234 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.638245 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.638252 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.638258 | orchestrator | 2025-07-06 20:12:00.638264 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-07-06 20:12:00.638271 | orchestrator | Sunday 06 July 2025 20:04:17 +0000 (0:00:00.520) 0:03:03.265 *********** 2025-07-06 20:12:00.638277 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.638283 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.638290 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.638296 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.638302 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.638308 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.638314 | orchestrator | 2025-07-06 20:12:00.638321 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-07-06 20:12:00.638327 | orchestrator | Sunday 06 July 2025 20:04:17 +0000 (0:00:00.567) 0:03:03.832 *********** 2025-07-06 20:12:00.638333 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.638339 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.638346 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.638352 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.638358 | orchestrator | skipping: [testbed-node-1] 2025-07-06 
20:12:00.638364 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.638370 | orchestrator | 2025-07-06 20:12:00.638394 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-07-06 20:12:00.638401 | orchestrator | Sunday 06 July 2025 20:04:18 +0000 (0:00:00.537) 0:03:04.370 *********** 2025-07-06 20:12:00.638407 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.638413 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.638419 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.638426 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.638432 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.638438 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.638444 | orchestrator | 2025-07-06 20:12:00.638451 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-07-06 20:12:00.638462 | orchestrator | Sunday 06 July 2025 20:04:18 +0000 (0:00:00.672) 0:03:05.042 *********** 2025-07-06 20:12:00.638469 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.638475 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.638482 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.638488 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.638494 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.638500 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.638507 | orchestrator | 2025-07-06 20:12:00.638513 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-07-06 20:12:00.638519 | orchestrator | Sunday 06 July 2025 20:04:21 +0000 (0:00:02.532) 0:03:07.574 *********** 2025-07-06 20:12:00.638526 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.638532 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.638538 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.638544 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.638550 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.638556 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.638563 | orchestrator | 2025-07-06 20:12:00.638569 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-07-06 20:12:00.638575 | orchestrator | Sunday 06 July 2025 20:04:21 +0000 (0:00:00.594) 0:03:08.168 *********** 2025-07-06 20:12:00.638582 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.638588 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.638594 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.638601 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.638607 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.638613 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.638625 | orchestrator | 2025-07-06 20:12:00.638631 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-07-06 20:12:00.638637 | orchestrator | Sunday 06 July 2025 20:04:22 +0000 (0:00:00.497) 0:03:08.665 *********** 2025-07-06 20:12:00.638644 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.638650 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.638656 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.638662 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.638668 | orchestrator | skipping: [testbed-node-1] 2025-07-06 
20:12:00.638674 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.638681 | orchestrator | 2025-07-06 20:12:00.638687 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-07-06 20:12:00.638693 | orchestrator | Sunday 06 July 2025 20:04:23 +0000 (0:00:00.762) 0:03:09.428 *********** 2025-07-06 20:12:00.638700 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-06 20:12:00.638706 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-06 20:12:00.638716 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-06 20:12:00.638727 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.638738 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.638753 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.638767 | orchestrator | 2025-07-06 20:12:00.638806 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-07-06 20:12:00.638817 | orchestrator | Sunday 06 July 2025 20:04:23 +0000 (0:00:00.522) 0:03:09.950 *********** 2025-07-06 20:12:00.638828 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-07-06 20:12:00.638839 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-07-06 20:12:00.638849 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-07-06 20:12:00.638859 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-07-06 20:12:00.638869 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.638879 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-07-06 20:12:00.638895 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-07-06 20:12:00.638905 | 
orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.638922 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.638928 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.638934 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.638941 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.638947 | orchestrator | 2025-07-06 20:12:00.638953 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-07-06 20:12:00.638959 | orchestrator | Sunday 06 July 2025 20:04:24 +0000 (0:00:00.657) 0:03:10.608 *********** 2025-07-06 20:12:00.638965 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.638972 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.638978 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.638984 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.638990 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.638996 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.639002 | orchestrator | 2025-07-06 20:12:00.639009 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-07-06 20:12:00.639015 | orchestrator | Sunday 06 July 2025 20:04:24 +0000 (0:00:00.501) 0:03:11.110 *********** 2025-07-06 20:12:00.639021 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.639027 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.639033 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.639040 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.639046 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.639052 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.639058 | orchestrator | 2025-07-06 20:12:00.639064 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-07-06 20:12:00.639071 | orchestrator | Sunday 06 July 2025 20:04:25 +0000 (0:00:00.682) 0:03:11.792 *********** 2025-07-06 20:12:00.639077 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.639083 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.639089 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.639095 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.639101 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.639107 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.639114 | orchestrator | 2025-07-06 20:12:00.639120 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-07-06 20:12:00.639126 | orchestrator | Sunday 06 July 2025 20:04:26 +0000 (0:00:00.519) 0:03:12.312 *********** 2025-07-06 20:12:00.639132 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.639138 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.639145 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.639151 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.639157 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.639163 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.639169 | orchestrator | 2025-07-06 20:12:00.639176 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-07-06 20:12:00.639182 | orchestrator | Sunday 06 July 2025 20:04:26 +0000 (0:00:00.699) 0:03:13.012 *********** 2025-07-06 20:12:00.639188 | 
orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.639215 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.639222 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.639229 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.639235 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.639241 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.639247 | orchestrator | 2025-07-06 20:12:00.639254 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-07-06 20:12:00.639260 | orchestrator | Sunday 06 July 2025 20:04:27 +0000 (0:00:00.634) 0:03:13.647 *********** 2025-07-06 20:12:00.639266 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.639272 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.639279 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.639285 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.639291 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.639302 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.639309 | orchestrator | 2025-07-06 20:12:00.639315 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-07-06 20:12:00.639322 | orchestrator | Sunday 06 July 2025 20:04:28 +0000 (0:00:01.030) 0:03:14.677 *********** 2025-07-06 20:12:00.639328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:12:00.639334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:12:00.639340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:12:00.639347 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.639353 | orchestrator | 2025-07-06 20:12:00.639359 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-07-06 20:12:00.639365 | orchestrator | Sunday 06 July 2025 20:04:28 +0000 (0:00:00.399) 0:03:15.077 *********** 2025-07-06 20:12:00.639371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:12:00.639402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:12:00.639413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:12:00.639420 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.639427 | orchestrator | 2025-07-06 20:12:00.639433 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-07-06 20:12:00.639439 | orchestrator | Sunday 06 July 2025 20:04:29 +0000 (0:00:00.404) 0:03:15.481 *********** 2025-07-06 20:12:00.639445 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:12:00.639452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:12:00.639458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:12:00.639464 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.639470 | orchestrator | 2025-07-06 20:12:00.639476 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-07-06 20:12:00.639482 | orchestrator | Sunday 06 July 2025 20:04:29 +0000 (0:00:00.421) 0:03:15.903 *********** 2025-07-06 20:12:00.639489 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.639495 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.639505 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.639511 | 
orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.639518 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.639524 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.639530 | orchestrator | 2025-07-06 20:12:00.639537 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-07-06 20:12:00.639543 | orchestrator | Sunday 06 July 2025 20:04:30 +0000 (0:00:00.614) 0:03:16.518 *********** 2025-07-06 20:12:00.639549 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-07-06 20:12:00.639555 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-07-06 20:12:00.639562 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-07-06 20:12:00.639568 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-07-06 20:12:00.639575 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.639581 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-07-06 20:12:00.639587 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.639593 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-07-06 20:12:00.639599 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.639605 | orchestrator | 2025-07-06 20:12:00.639611 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-07-06 20:12:00.639618 | orchestrator | Sunday 06 July 2025 20:04:32 +0000 (0:00:02.222) 0:03:18.740 *********** 2025-07-06 20:12:00.639624 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.639630 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.639636 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.639642 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.639648 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.639655 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.639667 | orchestrator | 2025-07-06 20:12:00.639674 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-06 20:12:00.639680 | orchestrator | Sunday 06 July 2025 20:04:35 +0000 (0:00:02.779) 0:03:21.520 *********** 2025-07-06 20:12:00.639686 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.639692 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.639698 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.639705 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.639711 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.639717 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.639723 | orchestrator | 2025-07-06 20:12:00.639729 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-07-06 20:12:00.639736 | orchestrator | Sunday 06 July 2025 20:04:36 +0000 (0:00:01.134) 0:03:22.654 *********** 2025-07-06 20:12:00.639742 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.639748 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.639754 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.639761 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.639767 | orchestrator | 2025-07-06 20:12:00.639773 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-07-06 20:12:00.639779 | orchestrator | Sunday 06 July 2025 20:04:37 +0000 (0:00:00.731) 0:03:23.386 *********** 2025-07-06 20:12:00.639786 | 
orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.639792 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.639798 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.639804 | orchestrator | 2025-07-06 20:12:00.639831 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-07-06 20:12:00.639838 | orchestrator | Sunday 06 July 2025 20:04:37 +0000 (0:00:00.262) 0:03:23.648 *********** 2025-07-06 20:12:00.639845 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.639851 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.639857 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.639863 | orchestrator | 2025-07-06 20:12:00.639870 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-07-06 20:12:00.639876 | orchestrator | Sunday 06 July 2025 20:04:38 +0000 (0:00:01.261) 0:03:24.910 *********** 2025-07-06 20:12:00.639882 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-06 20:12:00.639888 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-06 20:12:00.639895 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-06 20:12:00.639901 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.639907 | orchestrator | 2025-07-06 20:12:00.639917 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-07-06 20:12:00.639931 | orchestrator | Sunday 06 July 2025 20:04:39 +0000 (0:00:00.573) 0:03:25.483 *********** 2025-07-06 20:12:00.639947 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.639957 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.639968 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.639979 | orchestrator | 2025-07-06 20:12:00.639989 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-07-06 20:12:00.640001 | orchestrator | Sunday 06 July 2025 20:04:39 +0000 (0:00:00.276) 0:03:25.760 *********** 2025-07-06 20:12:00.640012 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.640024 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.640034 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.640045 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.640052 | orchestrator | 2025-07-06 20:12:00.640059 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-07-06 20:12:00.640065 | orchestrator | Sunday 06 July 2025 20:04:40 +0000 (0:00:00.847) 0:03:26.607 *********** 2025-07-06 20:12:00.640071 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:12:00.640085 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:12:00.640091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:12:00.640098 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.640104 | orchestrator | 2025-07-06 20:12:00.640110 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-07-06 20:12:00.640116 | orchestrator | Sunday 06 July 2025 20:04:40 +0000 (0:00:00.335) 0:03:26.943 *********** 2025-07-06 20:12:00.640123 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.640129 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.640139 | 
orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.640145 | orchestrator | 2025-07-06 20:12:00.640151 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-07-06 20:12:00.640157 | orchestrator | Sunday 06 July 2025 20:04:41 +0000 (0:00:00.243) 0:03:27.186 *********** 2025-07-06 20:12:00.640164 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.640170 | orchestrator | 2025-07-06 20:12:00.640176 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-07-06 20:12:00.640182 | orchestrator | Sunday 06 July 2025 20:04:41 +0000 (0:00:00.173) 0:03:27.360 *********** 2025-07-06 20:12:00.640188 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.640194 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.640201 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.640207 | orchestrator | 2025-07-06 20:12:00.640213 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-07-06 20:12:00.640219 | orchestrator | Sunday 06 July 2025 20:04:41 +0000 (0:00:00.253) 0:03:27.613 *********** 2025-07-06 20:12:00.640225 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.640232 | orchestrator | 2025-07-06 20:12:00.640238 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-07-06 20:12:00.640244 | orchestrator | Sunday 06 July 2025 20:04:41 +0000 (0:00:00.200) 0:03:27.814 *********** 2025-07-06 20:12:00.640250 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.640256 | orchestrator | 2025-07-06 20:12:00.640263 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-07-06 20:12:00.640272 | orchestrator | Sunday 06 July 2025 20:04:41 +0000 (0:00:00.189) 0:03:28.004 *********** 2025-07-06 20:12:00.640282 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.640292 | orchestrator | 2025-07-06 20:12:00.640301 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-07-06 20:12:00.640311 | orchestrator | Sunday 06 July 2025 20:04:42 +0000 (0:00:00.239) 0:03:28.244 *********** 2025-07-06 20:12:00.640321 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.640330 | orchestrator | 2025-07-06 20:12:00.640342 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-07-06 20:12:00.640353 | orchestrator | Sunday 06 July 2025 20:04:42 +0000 (0:00:00.187) 0:03:28.432 *********** 2025-07-06 20:12:00.640362 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.640372 | orchestrator | 2025-07-06 20:12:00.640395 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-07-06 20:12:00.640401 | orchestrator | Sunday 06 July 2025 20:04:42 +0000 (0:00:00.220) 0:03:28.652 *********** 2025-07-06 20:12:00.640408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:12:00.640414 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:12:00.640420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:12:00.640426 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.640433 | orchestrator | 2025-07-06 20:12:00.640439 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-07-06 20:12:00.640445 | orchestrator | Sunday 06 
July 2025 20:04:42 +0000 (0:00:00.362) 0:03:29.015 *********** 2025-07-06 20:12:00.640451 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.640484 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.640492 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.640505 | orchestrator | 2025-07-06 20:12:00.640511 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-07-06 20:12:00.640517 | orchestrator | Sunday 06 July 2025 20:04:43 +0000 (0:00:00.304) 0:03:29.319 *********** 2025-07-06 20:12:00.640524 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.640530 | orchestrator | 2025-07-06 20:12:00.640536 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-07-06 20:12:00.640542 | orchestrator | Sunday 06 July 2025 20:04:43 +0000 (0:00:00.184) 0:03:29.504 *********** 2025-07-06 20:12:00.640549 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.640555 | orchestrator | 2025-07-06 20:12:00.640561 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-07-06 20:12:00.640567 | orchestrator | Sunday 06 July 2025 20:04:43 +0000 (0:00:00.181) 0:03:29.686 *********** 2025-07-06 20:12:00.640573 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.640580 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.640586 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.640592 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.640598 | orchestrator | 2025-07-06 20:12:00.640605 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-07-06 20:12:00.640611 | orchestrator | Sunday 06 July 2025 20:04:44 +0000 (0:00:01.231) 0:03:30.917 *********** 2025-07-06 20:12:00.640617 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.640623 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.640630 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.640636 | orchestrator | 2025-07-06 20:12:00.640642 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-07-06 20:12:00.640649 | orchestrator | Sunday 06 July 2025 20:04:45 +0000 (0:00:00.317) 0:03:31.235 *********** 2025-07-06 20:12:00.640655 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.640661 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.640667 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.640673 | orchestrator | 2025-07-06 20:12:00.640680 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-07-06 20:12:00.640686 | orchestrator | Sunday 06 July 2025 20:04:46 +0000 (0:00:01.208) 0:03:32.443 *********** 2025-07-06 20:12:00.640692 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:12:00.640698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:12:00.640704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:12:00.640710 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.640717 | orchestrator | 2025-07-06 20:12:00.640723 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-07-06 20:12:00.640729 | orchestrator | Sunday 06 July 2025 20:04:47 +0000 (0:00:01.087) 0:03:33.531 
*********** 2025-07-06 20:12:00.640739 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.640746 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.640752 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.640758 | orchestrator | 2025-07-06 20:12:00.640764 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-07-06 20:12:00.640771 | orchestrator | Sunday 06 July 2025 20:04:47 +0000 (0:00:00.341) 0:03:33.873 *********** 2025-07-06 20:12:00.640777 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.640783 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.640789 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.640796 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.640802 | orchestrator | 2025-07-06 20:12:00.640808 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-07-06 20:12:00.640814 | orchestrator | Sunday 06 July 2025 20:04:48 +0000 (0:00:00.971) 0:03:34.844 *********** 2025-07-06 20:12:00.640821 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.640827 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.640838 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.640844 | orchestrator | 2025-07-06 20:12:00.640850 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-07-06 20:12:00.640857 | orchestrator | Sunday 06 July 2025 20:04:49 +0000 (0:00:00.344) 0:03:35.189 *********** 2025-07-06 20:12:00.640863 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.640869 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.640875 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.640881 | orchestrator | 2025-07-06 20:12:00.640888 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-07-06 20:12:00.640894 | orchestrator | Sunday 06 July 2025 20:04:50 +0000 (0:00:01.258) 0:03:36.448 *********** 2025-07-06 20:12:00.640900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:12:00.640906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:12:00.640912 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:12:00.640919 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.640925 | orchestrator | 2025-07-06 20:12:00.640931 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-07-06 20:12:00.640937 | orchestrator | Sunday 06 July 2025 20:04:51 +0000 (0:00:00.801) 0:03:37.249 *********** 2025-07-06 20:12:00.640943 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.640950 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.640956 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.640962 | orchestrator | 2025-07-06 20:12:00.640968 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-07-06 20:12:00.640975 | orchestrator | Sunday 06 July 2025 20:04:51 +0000 (0:00:00.327) 0:03:37.577 *********** 2025-07-06 20:12:00.640981 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.640987 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.640993 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.640999 | orchestrator | skipping: [testbed-node-0] 2025-07-06 
20:12:00.641006 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.641012 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.641018 | orchestrator | 2025-07-06 20:12:00.641024 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-07-06 20:12:00.641048 | orchestrator | Sunday 06 July 2025 20:04:52 +0000 (0:00:00.835) 0:03:38.412 *********** 2025-07-06 20:12:00.641056 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.641062 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.641068 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.641075 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.641081 | orchestrator | 2025-07-06 20:12:00.641087 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-07-06 20:12:00.641094 | orchestrator | Sunday 06 July 2025 20:04:53 +0000 (0:00:01.008) 0:03:39.421 *********** 2025-07-06 20:12:00.641100 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.641106 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.641112 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.641118 | orchestrator | 2025-07-06 20:12:00.641125 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-07-06 20:12:00.641131 | orchestrator | Sunday 06 July 2025 20:04:53 +0000 (0:00:00.338) 0:03:39.759 *********** 2025-07-06 20:12:00.641137 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.641143 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.641149 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.641155 | orchestrator | 2025-07-06 20:12:00.641161 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-07-06 20:12:00.641168 | orchestrator | Sunday 06 July 2025 20:04:54 +0000 (0:00:01.179) 0:03:40.939 *********** 2025-07-06 20:12:00.641174 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-06 20:12:00.641180 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-06 20:12:00.641191 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-06 20:12:00.641198 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.641204 | orchestrator | 2025-07-06 20:12:00.641210 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-07-06 20:12:00.641216 | orchestrator | Sunday 06 July 2025 20:04:55 +0000 (0:00:00.811) 0:03:41.751 *********** 2025-07-06 20:12:00.641222 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.641228 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.641235 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.641241 | orchestrator | 2025-07-06 20:12:00.641247 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-07-06 20:12:00.641253 | orchestrator | 2025-07-06 20:12:00.641259 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-06 20:12:00.641265 | orchestrator | Sunday 06 July 2025 20:04:56 +0000 (0:00:00.888) 0:03:42.640 *********** 2025-07-06 20:12:00.641272 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.641278 | orchestrator 
| 2025-07-06 20:12:00.641284 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-06 20:12:00.641294 | orchestrator | Sunday 06 July 2025 20:04:57 +0000 (0:00:00.652) 0:03:43.292 *********** 2025-07-06 20:12:00.641301 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.641307 | orchestrator | 2025-07-06 20:12:00.641313 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-06 20:12:00.641319 | orchestrator | Sunday 06 July 2025 20:04:58 +0000 (0:00:00.913) 0:03:44.206 *********** 2025-07-06 20:12:00.641326 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.641332 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.641338 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.641344 | orchestrator | 2025-07-06 20:12:00.641350 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-06 20:12:00.641357 | orchestrator | Sunday 06 July 2025 20:04:58 +0000 (0:00:00.697) 0:03:44.903 *********** 2025-07-06 20:12:00.641363 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.641369 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.641411 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.641418 | orchestrator | 2025-07-06 20:12:00.641425 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-06 20:12:00.641431 | orchestrator | Sunday 06 July 2025 20:04:59 +0000 (0:00:00.300) 0:03:45.203 *********** 2025-07-06 20:12:00.641437 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.641444 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.641450 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.641456 | orchestrator | 2025-07-06 20:12:00.641462 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-06 20:12:00.641469 | orchestrator | Sunday 06 July 2025 20:04:59 +0000 (0:00:00.292) 0:03:45.495 *********** 2025-07-06 20:12:00.641475 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.641481 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.641487 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.641493 | orchestrator | 2025-07-06 20:12:00.641499 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-06 20:12:00.641506 | orchestrator | Sunday 06 July 2025 20:04:59 +0000 (0:00:00.555) 0:03:46.051 *********** 2025-07-06 20:12:00.641512 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.641518 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.641524 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.641531 | orchestrator | 2025-07-06 20:12:00.641537 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-06 20:12:00.641543 | orchestrator | Sunday 06 July 2025 20:05:00 +0000 (0:00:00.709) 0:03:46.760 *********** 2025-07-06 20:12:00.641549 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.641562 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.641569 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.641575 | orchestrator | 2025-07-06 20:12:00.641581 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-06 
20:12:00.641587 | orchestrator | Sunday 06 July 2025 20:05:00 +0000 (0:00:00.313) 0:03:47.073 *********** 2025-07-06 20:12:00.641594 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.641600 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.641606 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.641612 | orchestrator | 2025-07-06 20:12:00.641639 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-06 20:12:00.641648 | orchestrator | Sunday 06 July 2025 20:05:01 +0000 (0:00:00.302) 0:03:47.376 *********** 2025-07-06 20:12:00.641654 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.641660 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.641667 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.641673 | orchestrator | 2025-07-06 20:12:00.641679 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-06 20:12:00.641685 | orchestrator | Sunday 06 July 2025 20:05:02 +0000 (0:00:01.055) 0:03:48.431 *********** 2025-07-06 20:12:00.641692 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.641698 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.641704 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.641710 | orchestrator | 2025-07-06 20:12:00.641716 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-06 20:12:00.641723 | orchestrator | Sunday 06 July 2025 20:05:02 +0000 (0:00:00.679) 0:03:49.111 *********** 2025-07-06 20:12:00.641729 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.641735 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.641741 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.641747 | orchestrator | 2025-07-06 20:12:00.641754 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-06 20:12:00.641760 | orchestrator | Sunday 06 July 2025 20:05:03 +0000 (0:00:00.277) 0:03:49.388 *********** 2025-07-06 20:12:00.641766 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.641772 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.641779 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.641785 | orchestrator | 2025-07-06 20:12:00.641791 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-06 20:12:00.641797 | orchestrator | Sunday 06 July 2025 20:05:03 +0000 (0:00:00.317) 0:03:49.706 *********** 2025-07-06 20:12:00.641804 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.641810 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.641816 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.641822 | orchestrator | 2025-07-06 20:12:00.641828 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-06 20:12:00.641835 | orchestrator | Sunday 06 July 2025 20:05:04 +0000 (0:00:00.520) 0:03:50.227 *********** 2025-07-06 20:12:00.641841 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.641847 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.641853 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.641860 | orchestrator | 2025-07-06 20:12:00.641866 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-06 20:12:00.641872 | orchestrator | Sunday 06 July 2025 20:05:04 +0000 (0:00:00.307) 0:03:50.534 *********** 
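The ceph-config tasks logged above size the OSDs by wrapping ceph-volume ("lvm batch --report" for planned OSDs, "lvm list" for existing ones) and render one rgw instance per storage node (addresses 192.168.16.13-15, port 8081) into that node's ceph.conf. As a minimal illustrative sketch only — the device paths below are assumptions and are not taken from this job — the underlying commands and the rendered rgw section for testbed-node-3 look roughly like this:

    # Ask ceph-volume how many OSDs a batch of devices would create (device paths assumed)
    ceph-volume lvm batch --report --format json /dev/sdb /dev/sdc
    # List OSDs that already exist on this host
    ceph-volume lvm list --format json

    # Rendered rgw section for testbed-node-3, matching the values shown in the log
    [client.rgw.default.testbed-node-3.rgw0]
    log_file = /var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log
    rgw_frontends = beast endpoint=192.168.16.13:8081

In this run the "Set config to cluster" items were skipped, so the rgw settings stay in the generated ceph.conf rather than being pushed into the cluster configuration database.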
2025-07-06 20:12:00.641878 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.641884 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.641890 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.641895 | orchestrator | 2025-07-06 20:12:00.641901 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-06 20:12:00.641912 | orchestrator | Sunday 06 July 2025 20:05:04 +0000 (0:00:00.282) 0:03:50.816 *********** 2025-07-06 20:12:00.641917 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.641923 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.641933 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.641938 | orchestrator | 2025-07-06 20:12:00.641944 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-06 20:12:00.641949 | orchestrator | Sunday 06 July 2025 20:05:04 +0000 (0:00:00.302) 0:03:51.118 *********** 2025-07-06 20:12:00.641955 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.641960 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.641966 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.641971 | orchestrator | 2025-07-06 20:12:00.641977 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-06 20:12:00.641982 | orchestrator | Sunday 06 July 2025 20:05:05 +0000 (0:00:00.561) 0:03:51.679 *********** 2025-07-06 20:12:00.641988 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.641993 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.641998 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.642004 | orchestrator | 2025-07-06 20:12:00.642009 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-06 20:12:00.642046 | orchestrator | Sunday 06 July 2025 20:05:05 +0000 (0:00:00.322) 0:03:52.001 *********** 2025-07-06 20:12:00.642052 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.642058 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.642063 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.642069 | orchestrator | 2025-07-06 20:12:00.642074 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-06 20:12:00.642080 | orchestrator | Sunday 06 July 2025 20:05:06 +0000 (0:00:00.342) 0:03:52.344 *********** 2025-07-06 20:12:00.642085 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.642091 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.642096 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.642101 | orchestrator | 2025-07-06 20:12:00.642107 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-07-06 20:12:00.642112 | orchestrator | Sunday 06 July 2025 20:05:06 +0000 (0:00:00.727) 0:03:53.072 *********** 2025-07-06 20:12:00.642118 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.642123 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.642128 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.642134 | orchestrator | 2025-07-06 20:12:00.642139 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-07-06 20:12:00.642145 | orchestrator | Sunday 06 July 2025 20:05:07 +0000 (0:00:00.327) 0:03:53.399 *********** 2025-07-06 20:12:00.642150 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-07-06 20:12:00.642156 | orchestrator | 2025-07-06 20:12:00.642161 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-07-06 20:12:00.642167 | orchestrator | Sunday 06 July 2025 20:05:07 +0000 (0:00:00.560) 0:03:53.959 *********** 2025-07-06 20:12:00.642172 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.642178 | orchestrator | 2025-07-06 20:12:00.642183 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-07-06 20:12:00.642206 | orchestrator | Sunday 06 July 2025 20:05:07 +0000 (0:00:00.162) 0:03:54.122 *********** 2025-07-06 20:12:00.642213 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-07-06 20:12:00.642218 | orchestrator | 2025-07-06 20:12:00.642224 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-07-06 20:12:00.642229 | orchestrator | Sunday 06 July 2025 20:05:09 +0000 (0:00:01.498) 0:03:55.621 *********** 2025-07-06 20:12:00.642235 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.642240 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.642246 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.642251 | orchestrator | 2025-07-06 20:12:00.642256 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-07-06 20:12:00.642262 | orchestrator | Sunday 06 July 2025 20:05:09 +0000 (0:00:00.347) 0:03:55.969 *********** 2025-07-06 20:12:00.642267 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.642273 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.642284 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.642289 | orchestrator | 2025-07-06 20:12:00.642295 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-07-06 20:12:00.642300 | orchestrator | Sunday 06 July 2025 20:05:10 +0000 (0:00:00.437) 0:03:56.406 *********** 2025-07-06 20:12:00.642306 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.642311 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.642317 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.642322 | orchestrator | 2025-07-06 20:12:00.642327 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-07-06 20:12:00.642333 | orchestrator | Sunday 06 July 2025 20:05:11 +0000 (0:00:01.286) 0:03:57.693 *********** 2025-07-06 20:12:00.642338 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.642344 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.642349 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.642354 | orchestrator | 2025-07-06 20:12:00.642360 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-07-06 20:12:00.642365 | orchestrator | Sunday 06 July 2025 20:05:12 +0000 (0:00:00.837) 0:03:58.530 *********** 2025-07-06 20:12:00.642371 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.642388 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.642393 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.642399 | orchestrator | 2025-07-06 20:12:00.642405 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-07-06 20:12:00.642410 | orchestrator | Sunday 06 July 2025 20:05:12 +0000 (0:00:00.584) 0:03:59.115 *********** 2025-07-06 20:12:00.642415 | orchestrator | ok: 
[testbed-node-0] 2025-07-06 20:12:00.642421 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.642426 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.642432 | orchestrator | 2025-07-06 20:12:00.642437 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-07-06 20:12:00.642442 | orchestrator | Sunday 06 July 2025 20:05:13 +0000 (0:00:00.658) 0:03:59.773 *********** 2025-07-06 20:12:00.642448 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.642453 | orchestrator | 2025-07-06 20:12:00.642459 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-07-06 20:12:00.642467 | orchestrator | Sunday 06 July 2025 20:05:14 +0000 (0:00:01.171) 0:04:00.945 *********** 2025-07-06 20:12:00.642473 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.642478 | orchestrator | 2025-07-06 20:12:00.642484 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-07-06 20:12:00.642489 | orchestrator | Sunday 06 July 2025 20:05:15 +0000 (0:00:00.606) 0:04:01.552 *********** 2025-07-06 20:12:00.642494 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-06 20:12:00.642500 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:12:00.642505 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:12:00.642510 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-06 20:12:00.642516 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-07-06 20:12:00.642521 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-06 20:12:00.642527 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-06 20:12:00.642532 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-07-06 20:12:00.642538 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-07-06 20:12:00.642543 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-07-06 20:12:00.642548 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-06 20:12:00.642554 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-07-06 20:12:00.642559 | orchestrator | 2025-07-06 20:12:00.642565 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-07-06 20:12:00.642570 | orchestrator | Sunday 06 July 2025 20:05:18 +0000 (0:00:03.190) 0:04:04.743 *********** 2025-07-06 20:12:00.642580 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.642585 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.642591 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.642596 | orchestrator | 2025-07-06 20:12:00.642602 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-07-06 20:12:00.642607 | orchestrator | Sunday 06 July 2025 20:05:20 +0000 (0:00:01.731) 0:04:06.474 *********** 2025-07-06 20:12:00.642613 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.642618 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.642623 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.642629 | orchestrator | 2025-07-06 20:12:00.642634 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-07-06 20:12:00.642640 | orchestrator | Sunday 06 July 2025 20:05:20 +0000 (0:00:00.332) 
0:04:06.807 *********** 2025-07-06 20:12:00.642645 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.642651 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.642656 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.642661 | orchestrator | 2025-07-06 20:12:00.642667 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-07-06 20:12:00.642672 | orchestrator | Sunday 06 July 2025 20:05:20 +0000 (0:00:00.340) 0:04:07.147 *********** 2025-07-06 20:12:00.642678 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.642683 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.642689 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.642694 | orchestrator | 2025-07-06 20:12:00.642716 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-07-06 20:12:00.642722 | orchestrator | Sunday 06 July 2025 20:05:23 +0000 (0:00:02.146) 0:04:09.294 *********** 2025-07-06 20:12:00.642728 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.642733 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.642739 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.642744 | orchestrator | 2025-07-06 20:12:00.642750 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-07-06 20:12:00.642755 | orchestrator | Sunday 06 July 2025 20:05:24 +0000 (0:00:01.609) 0:04:10.904 *********** 2025-07-06 20:12:00.642761 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.642766 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.642771 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.642777 | orchestrator | 2025-07-06 20:12:00.642782 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-07-06 20:12:00.642788 | orchestrator | Sunday 06 July 2025 20:05:25 +0000 (0:00:00.380) 0:04:11.284 *********** 2025-07-06 20:12:00.642793 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.642798 | orchestrator | 2025-07-06 20:12:00.642804 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-07-06 20:12:00.642809 | orchestrator | Sunday 06 July 2025 20:05:25 +0000 (0:00:00.548) 0:04:11.833 *********** 2025-07-06 20:12:00.642815 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.642820 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.642825 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.642831 | orchestrator | 2025-07-06 20:12:00.642836 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-07-06 20:12:00.642842 | orchestrator | Sunday 06 July 2025 20:05:26 +0000 (0:00:00.504) 0:04:12.337 *********** 2025-07-06 20:12:00.642847 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.642853 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.642858 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.642863 | orchestrator | 2025-07-06 20:12:00.642869 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-07-06 20:12:00.642874 | orchestrator | Sunday 06 July 2025 20:05:26 +0000 (0:00:00.320) 0:04:12.657 *********** 2025-07-06 20:12:00.642880 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-07-06 20:12:00.642889 | orchestrator | 2025-07-06 20:12:00.642895 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-07-06 20:12:00.642900 | orchestrator | Sunday 06 July 2025 20:05:26 +0000 (0:00:00.502) 0:04:13.159 *********** 2025-07-06 20:12:00.642906 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.642911 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.642917 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.642922 | orchestrator | 2025-07-06 20:12:00.642931 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-07-06 20:12:00.642936 | orchestrator | Sunday 06 July 2025 20:05:29 +0000 (0:00:02.071) 0:04:15.231 *********** 2025-07-06 20:12:00.642942 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.642947 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.642952 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.642958 | orchestrator | 2025-07-06 20:12:00.642963 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-07-06 20:12:00.642969 | orchestrator | Sunday 06 July 2025 20:05:30 +0000 (0:00:01.158) 0:04:16.389 *********** 2025-07-06 20:12:00.642974 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.642979 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.642985 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.642990 | orchestrator | 2025-07-06 20:12:00.642996 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-07-06 20:12:00.643001 | orchestrator | Sunday 06 July 2025 20:05:32 +0000 (0:00:01.801) 0:04:18.191 *********** 2025-07-06 20:12:00.643007 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.643012 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.643017 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.643023 | orchestrator | 2025-07-06 20:12:00.643028 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-07-06 20:12:00.643033 | orchestrator | Sunday 06 July 2025 20:05:33 +0000 (0:00:01.948) 0:04:20.140 *********** 2025-07-06 20:12:00.643039 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.643044 | orchestrator | 2025-07-06 20:12:00.643050 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2025-07-06 20:12:00.643055 | orchestrator | Sunday 06 July 2025 20:05:34 +0000 (0:00:00.833) 0:04:20.974 *********** 2025-07-06 20:12:00.643061 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.643066 | orchestrator | 2025-07-06 20:12:00.643071 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-07-06 20:12:00.643077 | orchestrator | Sunday 06 July 2025 20:05:35 +0000 (0:00:01.185) 0:04:22.160 *********** 2025-07-06 20:12:00.643082 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.643088 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.643093 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.643098 | orchestrator | 2025-07-06 20:12:00.643104 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-07-06 20:12:00.643109 | orchestrator | Sunday 06 July 2025 20:05:45 +0000 (0:00:09.274) 0:04:31.434 *********** 2025-07-06 20:12:00.643115 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.643120 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.643126 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.643131 | orchestrator | 2025-07-06 20:12:00.643137 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-07-06 20:12:00.643142 | orchestrator | Sunday 06 July 2025 20:05:45 +0000 (0:00:00.453) 0:04:31.888 *********** 2025-07-06 20:12:00.643163 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11071c103a20cd6dbedbaf15dc587f1f2b2d606d'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-07-06 20:12:00.643175 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11071c103a20cd6dbedbaf15dc587f1f2b2d606d'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-07-06 20:12:00.643183 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11071c103a20cd6dbedbaf15dc587f1f2b2d606d'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-07-06 20:12:00.643189 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11071c103a20cd6dbedbaf15dc587f1f2b2d606d'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-07-06 20:12:00.643195 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11071c103a20cd6dbedbaf15dc587f1f2b2d606d'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-07-06 
20:12:00.643205 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11071c103a20cd6dbedbaf15dc587f1f2b2d606d'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__11071c103a20cd6dbedbaf15dc587f1f2b2d606d'}])  2025-07-06 20:12:00.643211 | orchestrator | 2025-07-06 20:12:00.643217 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-06 20:12:00.643222 | orchestrator | Sunday 06 July 2025 20:06:00 +0000 (0:00:14.753) 0:04:46.642 *********** 2025-07-06 20:12:00.643228 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.643233 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.643238 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.643244 | orchestrator | 2025-07-06 20:12:00.643249 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-07-06 20:12:00.643255 | orchestrator | Sunday 06 July 2025 20:06:00 +0000 (0:00:00.295) 0:04:46.937 *********** 2025-07-06 20:12:00.643260 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.643266 | orchestrator | 2025-07-06 20:12:00.643271 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-07-06 20:12:00.643277 | orchestrator | Sunday 06 July 2025 20:06:01 +0000 (0:00:00.632) 0:04:47.570 *********** 2025-07-06 20:12:00.643282 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.643287 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.643293 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.643298 | orchestrator | 2025-07-06 20:12:00.643304 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-07-06 20:12:00.643309 | orchestrator | Sunday 06 July 2025 20:06:01 +0000 (0:00:00.329) 0:04:47.899 *********** 2025-07-06 20:12:00.643315 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.643320 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.643325 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.643331 | orchestrator | 2025-07-06 20:12:00.643336 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-07-06 20:12:00.643342 | orchestrator | Sunday 06 July 2025 20:06:02 +0000 (0:00:00.307) 0:04:48.207 *********** 2025-07-06 20:12:00.643352 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-06 20:12:00.643358 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-06 20:12:00.643363 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-06 20:12:00.643369 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.643374 | orchestrator | 2025-07-06 20:12:00.643392 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-07-06 20:12:00.643398 | orchestrator | Sunday 06 July 2025 20:06:02 +0000 (0:00:00.676) 0:04:48.883 *********** 2025-07-06 20:12:00.643403 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.643409 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.643414 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.643420 | 
orchestrator | 2025-07-06 20:12:00.643425 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-07-06 20:12:00.643431 | orchestrator | 2025-07-06 20:12:00.643436 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-06 20:12:00.643458 | orchestrator | Sunday 06 July 2025 20:06:03 +0000 (0:00:00.685) 0:04:49.569 *********** 2025-07-06 20:12:00.643464 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.643470 | orchestrator | 2025-07-06 20:12:00.643475 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-06 20:12:00.643481 | orchestrator | Sunday 06 July 2025 20:06:03 +0000 (0:00:00.439) 0:04:50.009 *********** 2025-07-06 20:12:00.643486 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.643492 | orchestrator | 2025-07-06 20:12:00.643497 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-06 20:12:00.643502 | orchestrator | Sunday 06 July 2025 20:06:04 +0000 (0:00:00.516) 0:04:50.525 *********** 2025-07-06 20:12:00.643508 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.643513 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.643519 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.643524 | orchestrator | 2025-07-06 20:12:00.643530 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-06 20:12:00.643535 | orchestrator | Sunday 06 July 2025 20:06:05 +0000 (0:00:00.746) 0:04:51.272 *********** 2025-07-06 20:12:00.643540 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.643546 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.643551 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.643557 | orchestrator | 2025-07-06 20:12:00.643562 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-06 20:12:00.643568 | orchestrator | Sunday 06 July 2025 20:06:05 +0000 (0:00:00.320) 0:04:51.592 *********** 2025-07-06 20:12:00.643573 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.643578 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.643584 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.643589 | orchestrator | 2025-07-06 20:12:00.643594 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-06 20:12:00.643600 | orchestrator | Sunday 06 July 2025 20:06:05 +0000 (0:00:00.422) 0:04:52.014 *********** 2025-07-06 20:12:00.643605 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.643611 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.643616 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.643621 | orchestrator | 2025-07-06 20:12:00.643627 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-06 20:12:00.643632 | orchestrator | Sunday 06 July 2025 20:06:06 +0000 (0:00:00.305) 0:04:52.320 *********** 2025-07-06 20:12:00.643638 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.643643 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.643649 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.643654 | orchestrator | 
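[annotation] The earlier "ceph-mon : Set cluster configs" task writes the values shown in its loop items (public_network and cluster_network 192.168.16.0/20, ms_bind_ipv4/ms_bind_ipv6, osd_pool_default_crush_rule) into the cluster configuration from the first mon node, and the "Waiting for the monitor(s) to form the quorum" step before it polls the mon quorum. As a hedged sketch only -- the role uses its own config handling, and in this containerized testbed the ceph CLI would be invoked inside the mon container on testbed-node-0 -- the roughly equivalent manual commands would be:

    # illustrative sketch, assuming an admin keyring is available on testbed-node-0
    ceph config set global public_network 192.168.16.0/20
    ceph config set global cluster_network 192.168.16.0/20
    ceph config set global ms_bind_ipv6 false
    ceph config set global ms_bind_ipv4 true
    # osd_pool_default_crush_rule (-1 in the log) is applied the same way
    # the quorum wait corresponds to polling something like:
    ceph quorum_status --format json
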
2025-07-06 20:12:00.643660 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-06 20:12:00.643673 | orchestrator | Sunday 06 July 2025 20:06:06 +0000 (0:00:00.753) 0:04:53.073 *********** 2025-07-06 20:12:00.643678 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.643684 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.643689 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.643695 | orchestrator | 2025-07-06 20:12:00.643700 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-06 20:12:00.643706 | orchestrator | Sunday 06 July 2025 20:06:07 +0000 (0:00:00.311) 0:04:53.384 *********** 2025-07-06 20:12:00.643711 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.643716 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.643722 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.643727 | orchestrator | 2025-07-06 20:12:00.643733 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-06 20:12:00.643738 | orchestrator | Sunday 06 July 2025 20:06:07 +0000 (0:00:00.532) 0:04:53.917 *********** 2025-07-06 20:12:00.643743 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.643749 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.643754 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.643760 | orchestrator | 2025-07-06 20:12:00.643765 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-06 20:12:00.643771 | orchestrator | Sunday 06 July 2025 20:06:08 +0000 (0:00:00.792) 0:04:54.710 *********** 2025-07-06 20:12:00.643776 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.643781 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.643787 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.643792 | orchestrator | 2025-07-06 20:12:00.643798 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-06 20:12:00.643803 | orchestrator | Sunday 06 July 2025 20:06:09 +0000 (0:00:00.730) 0:04:55.441 *********** 2025-07-06 20:12:00.643809 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.643814 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.643820 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.643825 | orchestrator | 2025-07-06 20:12:00.643830 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-06 20:12:00.643836 | orchestrator | Sunday 06 July 2025 20:06:09 +0000 (0:00:00.270) 0:04:55.711 *********** 2025-07-06 20:12:00.643841 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.643847 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.643852 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.643857 | orchestrator | 2025-07-06 20:12:00.643863 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-06 20:12:00.643869 | orchestrator | Sunday 06 July 2025 20:06:10 +0000 (0:00:00.537) 0:04:56.248 *********** 2025-07-06 20:12:00.643874 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.643879 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.643885 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.643890 | orchestrator | 2025-07-06 20:12:00.643896 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] 
****************************** 2025-07-06 20:12:00.643901 | orchestrator | Sunday 06 July 2025 20:06:10 +0000 (0:00:00.287) 0:04:56.536 *********** 2025-07-06 20:12:00.643906 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.643912 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.643917 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.643923 | orchestrator | 2025-07-06 20:12:00.643928 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-06 20:12:00.643948 | orchestrator | Sunday 06 July 2025 20:06:10 +0000 (0:00:00.282) 0:04:56.819 *********** 2025-07-06 20:12:00.643955 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.643960 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.643966 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.643971 | orchestrator | 2025-07-06 20:12:00.643977 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-06 20:12:00.643982 | orchestrator | Sunday 06 July 2025 20:06:10 +0000 (0:00:00.304) 0:04:57.124 *********** 2025-07-06 20:12:00.643991 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.643996 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.644002 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.644007 | orchestrator | 2025-07-06 20:12:00.644013 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-06 20:12:00.644018 | orchestrator | Sunday 06 July 2025 20:06:11 +0000 (0:00:00.523) 0:04:57.647 *********** 2025-07-06 20:12:00.644023 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.644029 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.644034 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.644040 | orchestrator | 2025-07-06 20:12:00.644045 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-06 20:12:00.644050 | orchestrator | Sunday 06 July 2025 20:06:11 +0000 (0:00:00.311) 0:04:57.958 *********** 2025-07-06 20:12:00.644056 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.644061 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.644067 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.644072 | orchestrator | 2025-07-06 20:12:00.644078 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-06 20:12:00.644083 | orchestrator | Sunday 06 July 2025 20:06:12 +0000 (0:00:00.377) 0:04:58.336 *********** 2025-07-06 20:12:00.644088 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.644094 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.644099 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.644104 | orchestrator | 2025-07-06 20:12:00.644110 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-06 20:12:00.644115 | orchestrator | Sunday 06 July 2025 20:06:12 +0000 (0:00:00.299) 0:04:58.635 *********** 2025-07-06 20:12:00.644121 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.644126 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.644132 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.644137 | orchestrator | 2025-07-06 20:12:00.644142 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-07-06 20:12:00.644148 | orchestrator | Sunday 06 July 2025 20:06:13 +0000 
(0:00:00.654) 0:04:59.289 *********** 2025-07-06 20:12:00.644154 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-06 20:12:00.644159 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-06 20:12:00.644165 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-06 20:12:00.644170 | orchestrator | 2025-07-06 20:12:00.644178 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-07-06 20:12:00.644183 | orchestrator | Sunday 06 July 2025 20:06:13 +0000 (0:00:00.537) 0:04:59.827 *********** 2025-07-06 20:12:00.644189 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.644194 | orchestrator | 2025-07-06 20:12:00.644200 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-07-06 20:12:00.644205 | orchestrator | Sunday 06 July 2025 20:06:14 +0000 (0:00:00.443) 0:05:00.271 *********** 2025-07-06 20:12:00.644211 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.644216 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.644221 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.644227 | orchestrator | 2025-07-06 20:12:00.644232 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-07-06 20:12:00.644238 | orchestrator | Sunday 06 July 2025 20:06:14 +0000 (0:00:00.883) 0:05:01.154 *********** 2025-07-06 20:12:00.644243 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.644248 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.644254 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.644259 | orchestrator | 2025-07-06 20:12:00.644265 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-07-06 20:12:00.644270 | orchestrator | Sunday 06 July 2025 20:06:15 +0000 (0:00:00.309) 0:05:01.463 *********** 2025-07-06 20:12:00.644276 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-06 20:12:00.644285 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-06 20:12:00.644291 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-06 20:12:00.644296 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-07-06 20:12:00.644302 | orchestrator | 2025-07-06 20:12:00.644307 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-07-06 20:12:00.644313 | orchestrator | Sunday 06 July 2025 20:06:25 +0000 (0:00:10.547) 0:05:12.011 *********** 2025-07-06 20:12:00.644318 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.644323 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.644329 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.644334 | orchestrator | 2025-07-06 20:12:00.644340 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-07-06 20:12:00.644345 | orchestrator | Sunday 06 July 2025 20:06:26 +0000 (0:00:00.320) 0:05:12.331 *********** 2025-07-06 20:12:00.644351 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-07-06 20:12:00.644356 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-06 20:12:00.644362 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-06 20:12:00.644367 | orchestrator | ok: [testbed-node-0] => (item=None) 
2025-07-06 20:12:00.644372 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:12:00.644389 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:12:00.644395 | orchestrator | 2025-07-06 20:12:00.644400 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-07-06 20:12:00.644406 | orchestrator | Sunday 06 July 2025 20:06:28 +0000 (0:00:02.417) 0:05:14.749 *********** 2025-07-06 20:12:00.644428 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-07-06 20:12:00.644434 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-06 20:12:00.644440 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-06 20:12:00.644445 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-06 20:12:00.644451 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-07-06 20:12:00.644456 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-07-06 20:12:00.644462 | orchestrator | 2025-07-06 20:12:00.644467 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-07-06 20:12:00.644473 | orchestrator | Sunday 06 July 2025 20:06:29 +0000 (0:00:01.326) 0:05:16.076 *********** 2025-07-06 20:12:00.644478 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.644483 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.644489 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.644494 | orchestrator | 2025-07-06 20:12:00.644500 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-07-06 20:12:00.644505 | orchestrator | Sunday 06 July 2025 20:06:30 +0000 (0:00:00.742) 0:05:16.818 *********** 2025-07-06 20:12:00.644511 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.644516 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.644521 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.644527 | orchestrator | 2025-07-06 20:12:00.644532 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-07-06 20:12:00.644538 | orchestrator | Sunday 06 July 2025 20:06:30 +0000 (0:00:00.236) 0:05:17.055 *********** 2025-07-06 20:12:00.644543 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.644548 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.644554 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.644559 | orchestrator | 2025-07-06 20:12:00.644565 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-07-06 20:12:00.644570 | orchestrator | Sunday 06 July 2025 20:06:31 +0000 (0:00:00.260) 0:05:17.315 *********** 2025-07-06 20:12:00.644576 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.644581 | orchestrator | 2025-07-06 20:12:00.644587 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-07-06 20:12:00.644598 | orchestrator | Sunday 06 July 2025 20:06:31 +0000 (0:00:00.601) 0:05:17.917 *********** 2025-07-06 20:12:00.644603 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.644609 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.644614 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.644619 | orchestrator | 2025-07-06 20:12:00.644625 | orchestrator | TASK [ceph-mgr : Add 
ceph-mgr systemd service overrides] *********************** 2025-07-06 20:12:00.644630 | orchestrator | Sunday 06 July 2025 20:06:31 +0000 (0:00:00.257) 0:05:18.174 *********** 2025-07-06 20:12:00.644636 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.644641 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.644647 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.644652 | orchestrator | 2025-07-06 20:12:00.644661 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-07-06 20:12:00.644667 | orchestrator | Sunday 06 July 2025 20:06:32 +0000 (0:00:00.253) 0:05:18.428 *********** 2025-07-06 20:12:00.644672 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.644678 | orchestrator | 2025-07-06 20:12:00.644683 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-07-06 20:12:00.644688 | orchestrator | Sunday 06 July 2025 20:06:32 +0000 (0:00:00.592) 0:05:19.020 *********** 2025-07-06 20:12:00.644694 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.644699 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.644705 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.644710 | orchestrator | 2025-07-06 20:12:00.644715 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-07-06 20:12:00.644721 | orchestrator | Sunday 06 July 2025 20:06:33 +0000 (0:00:01.137) 0:05:20.157 *********** 2025-07-06 20:12:00.644726 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.644732 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.644737 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.644743 | orchestrator | 2025-07-06 20:12:00.644748 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-07-06 20:12:00.644753 | orchestrator | Sunday 06 July 2025 20:06:35 +0000 (0:00:01.104) 0:05:21.261 *********** 2025-07-06 20:12:00.644759 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.644764 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.644770 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.644775 | orchestrator | 2025-07-06 20:12:00.644780 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-07-06 20:12:00.644786 | orchestrator | Sunday 06 July 2025 20:06:37 +0000 (0:00:02.064) 0:05:23.326 *********** 2025-07-06 20:12:00.644791 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.644797 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.644802 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.644808 | orchestrator | 2025-07-06 20:12:00.644813 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-07-06 20:12:00.644819 | orchestrator | Sunday 06 July 2025 20:06:39 +0000 (0:00:01.957) 0:05:25.283 *********** 2025-07-06 20:12:00.644824 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.644829 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.644835 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-07-06 20:12:00.644840 | orchestrator | 2025-07-06 20:12:00.644846 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-07-06 20:12:00.644851 | 
orchestrator | Sunday 06 July 2025 20:06:39 +0000 (0:00:00.328) 0:05:25.611 *********** 2025-07-06 20:12:00.644857 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-07-06 20:12:00.644862 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-07-06 20:12:00.644883 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-07-06 20:12:00.644894 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-07-06 20:12:00.644900 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-07-06 20:12:00.644905 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:12:00.644911 | orchestrator | 2025-07-06 20:12:00.644916 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-07-06 20:12:00.644921 | orchestrator | Sunday 06 July 2025 20:07:09 +0000 (0:00:29.914) 0:05:55.525 *********** 2025-07-06 20:12:00.644927 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:12:00.644932 | orchestrator | 2025-07-06 20:12:00.644938 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-07-06 20:12:00.644943 | orchestrator | Sunday 06 July 2025 20:07:10 +0000 (0:00:01.583) 0:05:57.109 *********** 2025-07-06 20:12:00.644949 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.644954 | orchestrator | 2025-07-06 20:12:00.644959 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-07-06 20:12:00.644965 | orchestrator | Sunday 06 July 2025 20:07:11 +0000 (0:00:00.849) 0:05:57.959 *********** 2025-07-06 20:12:00.644970 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.644976 | orchestrator | 2025-07-06 20:12:00.644981 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-07-06 20:12:00.644986 | orchestrator | Sunday 06 July 2025 20:07:11 +0000 (0:00:00.167) 0:05:58.126 *********** 2025-07-06 20:12:00.644992 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-07-06 20:12:00.644997 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-07-06 20:12:00.645003 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-07-06 20:12:00.645008 | orchestrator | 2025-07-06 20:12:00.645013 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-07-06 20:12:00.645019 | orchestrator | Sunday 06 July 2025 20:07:18 +0000 (0:00:06.116) 0:06:04.243 *********** 2025-07-06 20:12:00.645024 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-07-06 20:12:00.645030 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-07-06 20:12:00.645035 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-07-06 20:12:00.645041 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-07-06 20:12:00.645046 | orchestrator | 2025-07-06 20:12:00.645051 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-06 
20:12:00.645060 | orchestrator | Sunday 06 July 2025 20:07:22 +0000 (0:00:04.848) 0:06:09.091 *********** 2025-07-06 20:12:00.645065 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.645071 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.645076 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.645081 | orchestrator | 2025-07-06 20:12:00.645087 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-07-06 20:12:00.645092 | orchestrator | Sunday 06 July 2025 20:07:23 +0000 (0:00:00.986) 0:06:10.078 *********** 2025-07-06 20:12:00.645098 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.645103 | orchestrator | 2025-07-06 20:12:00.645109 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-07-06 20:12:00.645114 | orchestrator | Sunday 06 July 2025 20:07:24 +0000 (0:00:00.597) 0:06:10.676 *********** 2025-07-06 20:12:00.645119 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.645125 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.645130 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.645136 | orchestrator | 2025-07-06 20:12:00.645141 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-07-06 20:12:00.645151 | orchestrator | Sunday 06 July 2025 20:07:24 +0000 (0:00:00.319) 0:06:10.996 *********** 2025-07-06 20:12:00.645156 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.645162 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.645167 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.645173 | orchestrator | 2025-07-06 20:12:00.645178 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-07-06 20:12:00.645183 | orchestrator | Sunday 06 July 2025 20:07:26 +0000 (0:00:01.833) 0:06:12.829 *********** 2025-07-06 20:12:00.645189 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-06 20:12:00.645194 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-06 20:12:00.645200 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-06 20:12:00.645205 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.645210 | orchestrator | 2025-07-06 20:12:00.645216 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-07-06 20:12:00.645221 | orchestrator | Sunday 06 July 2025 20:07:27 +0000 (0:00:00.671) 0:06:13.501 *********** 2025-07-06 20:12:00.645227 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.645232 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.645238 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.645243 | orchestrator | 2025-07-06 20:12:00.645249 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-07-06 20:12:00.645254 | orchestrator | 2025-07-06 20:12:00.645260 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-06 20:12:00.645265 | orchestrator | Sunday 06 July 2025 20:07:28 +0000 (0:00:00.695) 0:06:14.196 *********** 2025-07-06 20:12:00.645270 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.645276 | orchestrator | 2025-07-06 20:12:00.645281 
| orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-06 20:12:00.645303 | orchestrator | Sunday 06 July 2025 20:07:28 +0000 (0:00:00.801) 0:06:14.997 *********** 2025-07-06 20:12:00.645309 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.645315 | orchestrator | 2025-07-06 20:12:00.645321 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-06 20:12:00.645326 | orchestrator | Sunday 06 July 2025 20:07:29 +0000 (0:00:00.510) 0:06:15.507 *********** 2025-07-06 20:12:00.645331 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.645337 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.645342 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.645348 | orchestrator | 2025-07-06 20:12:00.645353 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-06 20:12:00.645359 | orchestrator | Sunday 06 July 2025 20:07:29 +0000 (0:00:00.302) 0:06:15.810 *********** 2025-07-06 20:12:00.645364 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.645369 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.645405 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.645412 | orchestrator | 2025-07-06 20:12:00.645418 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-06 20:12:00.645423 | orchestrator | Sunday 06 July 2025 20:07:30 +0000 (0:00:01.001) 0:06:16.812 *********** 2025-07-06 20:12:00.645429 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.645434 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.645440 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.645445 | orchestrator | 2025-07-06 20:12:00.645451 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-06 20:12:00.645456 | orchestrator | Sunday 06 July 2025 20:07:31 +0000 (0:00:00.715) 0:06:17.527 *********** 2025-07-06 20:12:00.645462 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.645467 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.645472 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.645478 | orchestrator | 2025-07-06 20:12:00.645488 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-06 20:12:00.645494 | orchestrator | Sunday 06 July 2025 20:07:32 +0000 (0:00:00.684) 0:06:18.212 *********** 2025-07-06 20:12:00.645499 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.645505 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.645510 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.645515 | orchestrator | 2025-07-06 20:12:00.645521 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-06 20:12:00.645526 | orchestrator | Sunday 06 July 2025 20:07:32 +0000 (0:00:00.336) 0:06:18.549 *********** 2025-07-06 20:12:00.645532 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.645537 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.645542 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.645548 | orchestrator | 2025-07-06 20:12:00.645553 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-06 20:12:00.645559 | orchestrator | Sunday 06 July 
2025 20:07:32 +0000 (0:00:00.562) 0:06:19.111 *********** 2025-07-06 20:12:00.645567 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.645573 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.645578 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.645584 | orchestrator | 2025-07-06 20:12:00.645589 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-06 20:12:00.645595 | orchestrator | Sunday 06 July 2025 20:07:33 +0000 (0:00:00.336) 0:06:19.448 *********** 2025-07-06 20:12:00.645600 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.645605 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.645611 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.645616 | orchestrator | 2025-07-06 20:12:00.645622 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-06 20:12:00.645627 | orchestrator | Sunday 06 July 2025 20:07:33 +0000 (0:00:00.712) 0:06:20.160 *********** 2025-07-06 20:12:00.645633 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.645638 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.645643 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.645648 | orchestrator | 2025-07-06 20:12:00.645653 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-06 20:12:00.645658 | orchestrator | Sunday 06 July 2025 20:07:34 +0000 (0:00:00.727) 0:06:20.888 *********** 2025-07-06 20:12:00.645663 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.645667 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.645672 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.645677 | orchestrator | 2025-07-06 20:12:00.645682 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-06 20:12:00.645687 | orchestrator | Sunday 06 July 2025 20:07:35 +0000 (0:00:00.593) 0:06:21.481 *********** 2025-07-06 20:12:00.645691 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.645696 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.645701 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.645706 | orchestrator | 2025-07-06 20:12:00.645711 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-06 20:12:00.645715 | orchestrator | Sunday 06 July 2025 20:07:35 +0000 (0:00:00.326) 0:06:21.808 *********** 2025-07-06 20:12:00.645720 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.645725 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.645730 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.645734 | orchestrator | 2025-07-06 20:12:00.645739 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-06 20:12:00.645744 | orchestrator | Sunday 06 July 2025 20:07:35 +0000 (0:00:00.360) 0:06:22.168 *********** 2025-07-06 20:12:00.645749 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.645754 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.645759 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.645763 | orchestrator | 2025-07-06 20:12:00.645768 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-06 20:12:00.645777 | orchestrator | Sunday 06 July 2025 20:07:36 +0000 (0:00:00.339) 0:06:22.507 *********** 2025-07-06 20:12:00.645782 | orchestrator | ok: [testbed-node-3] 2025-07-06 
20:12:00.645786 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.645791 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.645796 | orchestrator | 2025-07-06 20:12:00.645801 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-06 20:12:00.645806 | orchestrator | Sunday 06 July 2025 20:07:37 +0000 (0:00:00.811) 0:06:23.319 *********** 2025-07-06 20:12:00.645813 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.645818 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.645823 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.645828 | orchestrator | 2025-07-06 20:12:00.645833 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-06 20:12:00.645838 | orchestrator | Sunday 06 July 2025 20:07:37 +0000 (0:00:00.340) 0:06:23.659 *********** 2025-07-06 20:12:00.645843 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.645848 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.645852 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.645857 | orchestrator | 2025-07-06 20:12:00.645862 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-06 20:12:00.645867 | orchestrator | Sunday 06 July 2025 20:07:37 +0000 (0:00:00.298) 0:06:23.958 *********** 2025-07-06 20:12:00.645872 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.645876 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.645881 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.645886 | orchestrator | 2025-07-06 20:12:00.645891 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-06 20:12:00.645896 | orchestrator | Sunday 06 July 2025 20:07:38 +0000 (0:00:00.386) 0:06:24.344 *********** 2025-07-06 20:12:00.645901 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.645905 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.645910 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.645915 | orchestrator | 2025-07-06 20:12:00.645920 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-06 20:12:00.645925 | orchestrator | Sunday 06 July 2025 20:07:38 +0000 (0:00:00.766) 0:06:25.110 *********** 2025-07-06 20:12:00.645930 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.645934 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.645939 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.645944 | orchestrator | 2025-07-06 20:12:00.645949 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-07-06 20:12:00.645954 | orchestrator | Sunday 06 July 2025 20:07:39 +0000 (0:00:00.757) 0:06:25.868 *********** 2025-07-06 20:12:00.645959 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.645963 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.645968 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.645973 | orchestrator | 2025-07-06 20:12:00.645978 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-07-06 20:12:00.645983 | orchestrator | Sunday 06 July 2025 20:07:40 +0000 (0:00:00.371) 0:06:26.239 *********** 2025-07-06 20:12:00.645988 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-06 20:12:00.645992 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-06 20:12:00.645997 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-06 20:12:00.646002 | orchestrator | 2025-07-06 20:12:00.646007 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-07-06 20:12:00.646030 | orchestrator | Sunday 06 July 2025 20:07:41 +0000 (0:00:01.107) 0:06:27.346 *********** 2025-07-06 20:12:00.646037 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.646042 | orchestrator | 2025-07-06 20:12:00.646047 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-07-06 20:12:00.646056 | orchestrator | Sunday 06 July 2025 20:07:41 +0000 (0:00:00.798) 0:06:28.144 *********** 2025-07-06 20:12:00.646060 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.646065 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.646070 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.646075 | orchestrator | 2025-07-06 20:12:00.646080 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-07-06 20:12:00.646085 | orchestrator | Sunday 06 July 2025 20:07:42 +0000 (0:00:00.302) 0:06:28.447 *********** 2025-07-06 20:12:00.646090 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.646094 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.646099 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.646104 | orchestrator | 2025-07-06 20:12:00.646109 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-07-06 20:12:00.646114 | orchestrator | Sunday 06 July 2025 20:07:42 +0000 (0:00:00.277) 0:06:28.724 *********** 2025-07-06 20:12:00.646119 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.646123 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.646128 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.646133 | orchestrator | 2025-07-06 20:12:00.646138 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-07-06 20:12:00.646143 | orchestrator | Sunday 06 July 2025 20:07:43 +0000 (0:00:00.884) 0:06:29.609 *********** 2025-07-06 20:12:00.646147 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.646152 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.646157 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.646162 | orchestrator | 2025-07-06 20:12:00.646167 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-07-06 20:12:00.646172 | orchestrator | Sunday 06 July 2025 20:07:43 +0000 (0:00:00.324) 0:06:29.933 *********** 2025-07-06 20:12:00.646176 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-06 20:12:00.646181 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-06 20:12:00.646186 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-06 20:12:00.646191 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-06 20:12:00.646196 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-06 20:12:00.646200 | 
orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-06 20:12:00.646205 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-06 20:12:00.646215 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-06 20:12:00.646220 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-06 20:12:00.646225 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-06 20:12:00.646229 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-06 20:12:00.646234 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-06 20:12:00.646239 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-06 20:12:00.646244 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-06 20:12:00.646249 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-06 20:12:00.646254 | orchestrator | 2025-07-06 20:12:00.646258 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-07-06 20:12:00.646263 | orchestrator | Sunday 06 July 2025 20:07:47 +0000 (0:00:04.089) 0:06:34.022 *********** 2025-07-06 20:12:00.646268 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.646273 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.646282 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.646287 | orchestrator | 2025-07-06 20:12:00.646292 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-07-06 20:12:00.646297 | orchestrator | Sunday 06 July 2025 20:07:48 +0000 (0:00:00.288) 0:06:34.311 *********** 2025-07-06 20:12:00.646301 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.646306 | orchestrator | 2025-07-06 20:12:00.646311 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-07-06 20:12:00.646316 | orchestrator | Sunday 06 July 2025 20:07:48 +0000 (0:00:00.750) 0:06:35.062 *********** 2025-07-06 20:12:00.646321 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-06 20:12:00.646326 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-06 20:12:00.646330 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-06 20:12:00.646335 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-07-06 20:12:00.646340 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-07-06 20:12:00.646345 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-07-06 20:12:00.646350 | orchestrator | 2025-07-06 20:12:00.646355 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-07-06 20:12:00.646359 | orchestrator | Sunday 06 July 2025 20:07:49 +0000 (0:00:01.004) 0:06:36.067 *********** 2025-07-06 20:12:00.646367 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:12:00.646372 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-06 20:12:00.646412 
| orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-06 20:12:00.646417 | orchestrator | 2025-07-06 20:12:00.646422 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-07-06 20:12:00.646427 | orchestrator | Sunday 06 July 2025 20:07:52 +0000 (0:00:02.174) 0:06:38.241 *********** 2025-07-06 20:12:00.646432 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-06 20:12:00.646437 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-06 20:12:00.646442 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-06 20:12:00.646447 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.646452 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-07-06 20:12:00.646456 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.646461 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-06 20:12:00.646466 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-07-06 20:12:00.646471 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.646476 | orchestrator | 2025-07-06 20:12:00.646480 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-07-06 20:12:00.646485 | orchestrator | Sunday 06 July 2025 20:07:53 +0000 (0:00:01.445) 0:06:39.687 *********** 2025-07-06 20:12:00.646490 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:12:00.646495 | orchestrator | 2025-07-06 20:12:00.646500 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-07-06 20:12:00.646505 | orchestrator | Sunday 06 July 2025 20:07:55 +0000 (0:00:02.152) 0:06:41.840 *********** 2025-07-06 20:12:00.646509 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.646514 | orchestrator | 2025-07-06 20:12:00.646519 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-07-06 20:12:00.646524 | orchestrator | Sunday 06 July 2025 20:07:56 +0000 (0:00:00.559) 0:06:42.399 *********** 2025-07-06 20:12:00.646529 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5b3ebdad-89cb-5093-adb4-41e3a34848e3', 'data_vg': 'ceph-5b3ebdad-89cb-5093-adb4-41e3a34848e3'}) 2025-07-06 20:12:00.646534 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6b2ac7c1-b26c-557b-8077-56c3cb59db23', 'data_vg': 'ceph-6b2ac7c1-b26c-557b-8077-56c3cb59db23'}) 2025-07-06 20:12:00.646544 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4472ae94-c442-5fee-95ac-d2e3b3e55ca4', 'data_vg': 'ceph-4472ae94-c442-5fee-95ac-d2e3b3e55ca4'}) 2025-07-06 20:12:00.646549 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-67620618-3322-5703-9264-076cb24f91fa', 'data_vg': 'ceph-67620618-3322-5703-9264-076cb24f91fa'}) 2025-07-06 20:12:00.646557 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d', 'data_vg': 'ceph-e81f0ba1-e76a-5ac2-85fd-9d5b359e204d'}) 2025-07-06 20:12:00.646562 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8c6cf71a-fa39-576b-8a24-237c163534df', 'data_vg': 'ceph-8c6cf71a-fa39-576b-8a24-237c163534df'}) 2025-07-06 20:12:00.646567 | orchestrator | 2025-07-06 20:12:00.646572 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-07-06 
20:12:00.646577 | orchestrator | Sunday 06 July 2025 20:08:37 +0000 (0:00:41.636) 0:07:24.036 *********** 2025-07-06 20:12:00.646581 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.646586 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.646591 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.646596 | orchestrator | 2025-07-06 20:12:00.646601 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-07-06 20:12:00.646606 | orchestrator | Sunday 06 July 2025 20:08:38 +0000 (0:00:00.678) 0:07:24.714 *********** 2025-07-06 20:12:00.646611 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.646616 | orchestrator | 2025-07-06 20:12:00.646621 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-07-06 20:12:00.646625 | orchestrator | Sunday 06 July 2025 20:08:39 +0000 (0:00:00.566) 0:07:25.280 *********** 2025-07-06 20:12:00.646630 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.646635 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.646640 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.646645 | orchestrator | 2025-07-06 20:12:00.646650 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-07-06 20:12:00.646654 | orchestrator | Sunday 06 July 2025 20:08:39 +0000 (0:00:00.654) 0:07:25.935 *********** 2025-07-06 20:12:00.646659 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.646664 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.646669 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.646674 | orchestrator | 2025-07-06 20:12:00.646679 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-07-06 20:12:00.646684 | orchestrator | Sunday 06 July 2025 20:08:42 +0000 (0:00:03.022) 0:07:28.958 *********** 2025-07-06 20:12:00.646688 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.646693 | orchestrator | 2025-07-06 20:12:00.646698 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-07-06 20:12:00.646703 | orchestrator | Sunday 06 July 2025 20:08:43 +0000 (0:00:00.567) 0:07:29.526 *********** 2025-07-06 20:12:00.646708 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.646713 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.646717 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.646722 | orchestrator | 2025-07-06 20:12:00.646730 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-07-06 20:12:00.646735 | orchestrator | Sunday 06 July 2025 20:08:44 +0000 (0:00:01.215) 0:07:30.741 *********** 2025-07-06 20:12:00.646740 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.646745 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.646749 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.646754 | orchestrator | 2025-07-06 20:12:00.646759 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-07-06 20:12:00.646764 | orchestrator | Sunday 06 July 2025 20:08:45 +0000 (0:00:01.424) 0:07:32.166 *********** 2025-07-06 20:12:00.646772 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.646777 | orchestrator | changed: 
[testbed-node-4] 2025-07-06 20:12:00.646782 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.646787 | orchestrator | 2025-07-06 20:12:00.646792 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-07-06 20:12:00.646797 | orchestrator | Sunday 06 July 2025 20:08:47 +0000 (0:00:01.764) 0:07:33.930 *********** 2025-07-06 20:12:00.646802 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.646806 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.646811 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.646816 | orchestrator | 2025-07-06 20:12:00.646821 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-07-06 20:12:00.646826 | orchestrator | Sunday 06 July 2025 20:08:48 +0000 (0:00:00.357) 0:07:34.287 *********** 2025-07-06 20:12:00.646831 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.646835 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.646840 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.646845 | orchestrator | 2025-07-06 20:12:00.646850 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-07-06 20:12:00.646855 | orchestrator | Sunday 06 July 2025 20:08:48 +0000 (0:00:00.317) 0:07:34.605 *********** 2025-07-06 20:12:00.646860 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-07-06 20:12:00.646864 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-07-06 20:12:00.646869 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-07-06 20:12:00.646874 | orchestrator | ok: [testbed-node-3] => (item=1) 2025-07-06 20:12:00.646879 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-07-06 20:12:00.646884 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-07-06 20:12:00.646889 | orchestrator | 2025-07-06 20:12:00.646893 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-07-06 20:12:00.646898 | orchestrator | Sunday 06 July 2025 20:08:49 +0000 (0:00:01.371) 0:07:35.977 *********** 2025-07-06 20:12:00.646903 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-07-06 20:12:00.646908 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-07-06 20:12:00.646913 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-07-06 20:12:00.646918 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-07-06 20:12:00.646922 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-07-06 20:12:00.646927 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-07-06 20:12:00.646932 | orchestrator | 2025-07-06 20:12:00.646937 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-07-06 20:12:00.646942 | orchestrator | Sunday 06 July 2025 20:08:51 +0000 (0:00:02.094) 0:07:38.072 *********** 2025-07-06 20:12:00.646949 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-07-06 20:12:00.646954 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-07-06 20:12:00.646959 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-07-06 20:12:00.646964 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-07-06 20:12:00.646969 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-07-06 20:12:00.646973 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-07-06 20:12:00.646978 | orchestrator | 2025-07-06 20:12:00.646983 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-07-06 
20:12:00.646988 | orchestrator | Sunday 06 July 2025 20:08:55 +0000 (0:00:03.536) 0:07:41.608 *********** 2025-07-06 20:12:00.646993 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.646998 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.647002 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:12:00.647007 | orchestrator | 2025-07-06 20:12:00.647012 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-07-06 20:12:00.647017 | orchestrator | Sunday 06 July 2025 20:08:57 +0000 (0:00:02.425) 0:07:44.033 *********** 2025-07-06 20:12:00.647022 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647027 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.647037 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-07-06 20:12:00.647041 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:12:00.647046 | orchestrator | 2025-07-06 20:12:00.647051 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-07-06 20:12:00.647056 | orchestrator | Sunday 06 July 2025 20:09:10 +0000 (0:00:12.946) 0:07:56.979 *********** 2025-07-06 20:12:00.647061 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647066 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.647071 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.647076 | orchestrator | 2025-07-06 20:12:00.647081 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-06 20:12:00.647085 | orchestrator | Sunday 06 July 2025 20:09:11 +0000 (0:00:00.727) 0:07:57.706 *********** 2025-07-06 20:12:00.647090 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647095 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.647100 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.647105 | orchestrator | 2025-07-06 20:12:00.647110 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-07-06 20:12:00.647115 | orchestrator | Sunday 06 July 2025 20:09:11 +0000 (0:00:00.467) 0:07:58.174 *********** 2025-07-06 20:12:00.647120 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.647124 | orchestrator | 2025-07-06 20:12:00.647129 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-07-06 20:12:00.647137 | orchestrator | Sunday 06 July 2025 20:09:12 +0000 (0:00:00.467) 0:07:58.641 *********** 2025-07-06 20:12:00.647142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:12:00.647147 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:12:00.647152 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:12:00.647157 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647161 | orchestrator | 2025-07-06 20:12:00.647166 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-07-06 20:12:00.647171 | orchestrator | Sunday 06 July 2025 20:09:12 +0000 (0:00:00.353) 0:07:58.995 *********** 2025-07-06 20:12:00.647176 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647181 | orchestrator | skipping: [testbed-node-4] 2025-07-06 
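The "Wait for all osd to be up" task above retried once before all OSDs reported up. That kind of poll is expressed in Ansible with until/retries/delay; the sketch below is illustrative only and assumes the num_osds/num_up_osds fields returned by ceph osd stat -f json on this Ceph release:

- name: Wait until every OSD reports up (illustrative sketch only)
  hosts: mons[0]                      # assumed monitor group name
  become: true
  tasks:
    - name: Wait for all osd to be up
      ansible.builtin.command: ceph osd stat -f json
      register: osd_stat
      changed_when: false
      retries: 60
      delay: 10
      until: >-
        (osd_stat.stdout | from_json).num_osds > 0 and
        (osd_stat.stdout | from_json).num_osds == (osd_stat.stdout | from_json).num_up_osds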
20:12:00.647186 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.647190 | orchestrator | 2025-07-06 20:12:00.647195 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-07-06 20:12:00.647200 | orchestrator | Sunday 06 July 2025 20:09:13 +0000 (0:00:00.259) 0:07:59.254 *********** 2025-07-06 20:12:00.647205 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647210 | orchestrator | 2025-07-06 20:12:00.647215 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-07-06 20:12:00.647219 | orchestrator | Sunday 06 July 2025 20:09:13 +0000 (0:00:00.174) 0:07:59.429 *********** 2025-07-06 20:12:00.647224 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647229 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.647234 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.647239 | orchestrator | 2025-07-06 20:12:00.647244 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-07-06 20:12:00.647248 | orchestrator | Sunday 06 July 2025 20:09:13 +0000 (0:00:00.452) 0:07:59.881 *********** 2025-07-06 20:12:00.647253 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647258 | orchestrator | 2025-07-06 20:12:00.647263 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-07-06 20:12:00.647268 | orchestrator | Sunday 06 July 2025 20:09:13 +0000 (0:00:00.187) 0:08:00.069 *********** 2025-07-06 20:12:00.647273 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647277 | orchestrator | 2025-07-06 20:12:00.647282 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-07-06 20:12:00.647290 | orchestrator | Sunday 06 July 2025 20:09:14 +0000 (0:00:00.193) 0:08:00.263 *********** 2025-07-06 20:12:00.647295 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647300 | orchestrator | 2025-07-06 20:12:00.647305 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-07-06 20:12:00.647310 | orchestrator | Sunday 06 July 2025 20:09:14 +0000 (0:00:00.111) 0:08:00.374 *********** 2025-07-06 20:12:00.647315 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647320 | orchestrator | 2025-07-06 20:12:00.647324 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-07-06 20:12:00.647329 | orchestrator | Sunday 06 July 2025 20:09:14 +0000 (0:00:00.205) 0:08:00.580 *********** 2025-07-06 20:12:00.647334 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647339 | orchestrator | 2025-07-06 20:12:00.647344 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-07-06 20:12:00.647352 | orchestrator | Sunday 06 July 2025 20:09:14 +0000 (0:00:00.195) 0:08:00.776 *********** 2025-07-06 20:12:00.647357 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:12:00.647362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:12:00.647367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:12:00.647371 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647404 | orchestrator | 2025-07-06 20:12:00.647410 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-07-06 20:12:00.647415 | 
orchestrator | Sunday 06 July 2025 20:09:14 +0000 (0:00:00.325) 0:08:01.101 *********** 2025-07-06 20:12:00.647419 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647424 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.647429 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.647434 | orchestrator | 2025-07-06 20:12:00.647439 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-07-06 20:12:00.647444 | orchestrator | Sunday 06 July 2025 20:09:15 +0000 (0:00:00.294) 0:08:01.395 *********** 2025-07-06 20:12:00.647449 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647453 | orchestrator | 2025-07-06 20:12:00.647458 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-07-06 20:12:00.647463 | orchestrator | Sunday 06 July 2025 20:09:15 +0000 (0:00:00.648) 0:08:02.043 *********** 2025-07-06 20:12:00.647468 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647473 | orchestrator | 2025-07-06 20:12:00.647478 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-07-06 20:12:00.647483 | orchestrator | 2025-07-06 20:12:00.647487 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-06 20:12:00.647492 | orchestrator | Sunday 06 July 2025 20:09:16 +0000 (0:00:00.650) 0:08:02.694 *********** 2025-07-06 20:12:00.647497 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.647502 | orchestrator | 2025-07-06 20:12:00.647507 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-06 20:12:00.647512 | orchestrator | Sunday 06 July 2025 20:09:17 +0000 (0:00:01.195) 0:08:03.890 *********** 2025-07-06 20:12:00.647517 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.647522 | orchestrator | 2025-07-06 20:12:00.647527 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-06 20:12:00.647531 | orchestrator | Sunday 06 July 2025 20:09:18 +0000 (0:00:01.228) 0:08:05.118 *********** 2025-07-06 20:12:00.647536 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647544 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.647549 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.647554 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.647565 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.647573 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.647582 | orchestrator | 2025-07-06 20:12:00.647588 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-06 20:12:00.647593 | orchestrator | Sunday 06 July 2025 20:09:20 +0000 (0:00:01.361) 0:08:06.480 *********** 2025-07-06 20:12:00.647597 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.647602 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.647607 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.647612 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.647617 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.647621 | orchestrator | 
ok: [testbed-node-5] 2025-07-06 20:12:00.647626 | orchestrator | 2025-07-06 20:12:00.647631 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-06 20:12:00.647636 | orchestrator | Sunday 06 July 2025 20:09:21 +0000 (0:00:00.728) 0:08:07.208 *********** 2025-07-06 20:12:00.647641 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.647645 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.647650 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.647655 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.647660 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.647665 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.647669 | orchestrator | 2025-07-06 20:12:00.647674 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-06 20:12:00.647692 | orchestrator | Sunday 06 July 2025 20:09:21 +0000 (0:00:00.886) 0:08:08.095 *********** 2025-07-06 20:12:00.647696 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.647701 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.647706 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.647711 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.647716 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.647721 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.647726 | orchestrator | 2025-07-06 20:12:00.647730 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-06 20:12:00.647735 | orchestrator | Sunday 06 July 2025 20:09:22 +0000 (0:00:00.686) 0:08:08.781 *********** 2025-07-06 20:12:00.647740 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647748 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.647757 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.647762 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.647766 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.647771 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.647775 | orchestrator | 2025-07-06 20:12:00.647780 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-06 20:12:00.647785 | orchestrator | Sunday 06 July 2025 20:09:23 +0000 (0:00:01.217) 0:08:09.998 *********** 2025-07-06 20:12:00.647789 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647794 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.647798 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.647803 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.647807 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.647812 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.647816 | orchestrator | 2025-07-06 20:12:00.647821 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-06 20:12:00.647828 | orchestrator | Sunday 06 July 2025 20:09:24 +0000 (0:00:00.585) 0:08:10.583 *********** 2025-07-06 20:12:00.647835 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647842 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.647849 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.647854 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.647858 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.647863 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.647867 
| orchestrator | 2025-07-06 20:12:00.647872 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-06 20:12:00.647888 | orchestrator | Sunday 06 July 2025 20:09:25 +0000 (0:00:00.835) 0:08:11.419 *********** 2025-07-06 20:12:00.647893 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.647897 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.647902 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.647906 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.647911 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.647915 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.647920 | orchestrator | 2025-07-06 20:12:00.647925 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-06 20:12:00.647929 | orchestrator | Sunday 06 July 2025 20:09:26 +0000 (0:00:01.030) 0:08:12.449 *********** 2025-07-06 20:12:00.647934 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.647938 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.647943 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.647947 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.647952 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.647956 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.647960 | orchestrator | 2025-07-06 20:12:00.647965 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-06 20:12:00.647970 | orchestrator | Sunday 06 July 2025 20:09:27 +0000 (0:00:01.278) 0:08:13.727 *********** 2025-07-06 20:12:00.647974 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.647979 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.647983 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.647988 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.647992 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.647997 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.648001 | orchestrator | 2025-07-06 20:12:00.648006 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-06 20:12:00.648011 | orchestrator | Sunday 06 July 2025 20:09:28 +0000 (0:00:00.556) 0:08:14.284 *********** 2025-07-06 20:12:00.648015 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.648020 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.648024 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.648029 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.648033 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.648038 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.648042 | orchestrator | 2025-07-06 20:12:00.648047 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-06 20:12:00.648051 | orchestrator | Sunday 06 July 2025 20:09:28 +0000 (0:00:00.748) 0:08:15.032 *********** 2025-07-06 20:12:00.648056 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.648061 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.648065 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.648070 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.648074 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.648079 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.648084 | orchestrator | 2025-07-06 20:12:00.648088 | orchestrator | TASK [ceph-handler : Set_fact 
handler_mds_status] ****************************** 2025-07-06 20:12:00.648093 | orchestrator | Sunday 06 July 2025 20:09:29 +0000 (0:00:00.515) 0:08:15.547 *********** 2025-07-06 20:12:00.648098 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.648102 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.648107 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.648111 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.648116 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.648120 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.648125 | orchestrator | 2025-07-06 20:12:00.648129 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-06 20:12:00.648134 | orchestrator | Sunday 06 July 2025 20:09:29 +0000 (0:00:00.632) 0:08:16.179 *********** 2025-07-06 20:12:00.648138 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.648143 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.648147 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.648152 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.648161 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.648166 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.648170 | orchestrator | 2025-07-06 20:12:00.648175 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-06 20:12:00.648180 | orchestrator | Sunday 06 July 2025 20:09:30 +0000 (0:00:00.513) 0:08:16.693 *********** 2025-07-06 20:12:00.648184 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.648189 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.648193 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.648198 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.648202 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.648207 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.648211 | orchestrator | 2025-07-06 20:12:00.648238 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-06 20:12:00.648243 | orchestrator | Sunday 06 July 2025 20:09:31 +0000 (0:00:00.666) 0:08:17.359 *********** 2025-07-06 20:12:00.648248 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.648252 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.648257 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.648261 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:00.648266 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:00.648270 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:00.648275 | orchestrator | 2025-07-06 20:12:00.648280 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-06 20:12:00.648284 | orchestrator | Sunday 06 July 2025 20:09:31 +0000 (0:00:00.523) 0:08:17.883 *********** 2025-07-06 20:12:00.648289 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.648293 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.648298 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.648302 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.648307 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.648312 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.648316 | orchestrator | 2025-07-06 20:12:00.648323 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] 
**************************** 2025-07-06 20:12:00.648328 | orchestrator | Sunday 06 July 2025 20:09:32 +0000 (0:00:00.782) 0:08:18.665 *********** 2025-07-06 20:12:00.648333 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.648337 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.648342 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.648346 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.648351 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.648355 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.648360 | orchestrator | 2025-07-06 20:12:00.648364 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-06 20:12:00.648369 | orchestrator | Sunday 06 July 2025 20:09:33 +0000 (0:00:00.633) 0:08:19.299 *********** 2025-07-06 20:12:00.648373 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.648389 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.648393 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.648398 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.648403 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.648407 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.648411 | orchestrator | 2025-07-06 20:12:00.648416 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-07-06 20:12:00.648421 | orchestrator | Sunday 06 July 2025 20:09:34 +0000 (0:00:01.213) 0:08:20.513 *********** 2025-07-06 20:12:00.648425 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:12:00.648430 | orchestrator | 2025-07-06 20:12:00.648434 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-07-06 20:12:00.648439 | orchestrator | Sunday 06 July 2025 20:09:38 +0000 (0:00:04.134) 0:08:24.647 *********** 2025-07-06 20:12:00.648444 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:12:00.648448 | orchestrator | 2025-07-06 20:12:00.648453 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-07-06 20:12:00.648462 | orchestrator | Sunday 06 July 2025 20:09:40 +0000 (0:00:02.052) 0:08:26.699 *********** 2025-07-06 20:12:00.648466 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.648471 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.648475 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.648480 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.648485 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.648489 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.648494 | orchestrator | 2025-07-06 20:12:00.648498 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-07-06 20:12:00.648503 | orchestrator | Sunday 06 July 2025 20:09:42 +0000 (0:00:01.865) 0:08:28.565 *********** 2025-07-06 20:12:00.648507 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.648512 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.648517 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.648521 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.648526 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.648530 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.648535 | orchestrator | 2025-07-06 20:12:00.648542 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] 
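The ceph-crash play above first creates a client.crash keyring on a monitor, fetches it, and distributes it to every node before creating /var/lib/ceph/crash/posted. A minimal sketch of the keyring step, assuming the "profile crash" capabilities documented for the Ceph crash module and an assumed output path (the caps and path used by the role are not visible in this log):

- name: Create the client.crash keyring (illustrative sketch only)
  hosts: mons[0]                      # assumed monitor group name
  become: true
  tasks:
    - name: Create client.crash keyring
      ansible.builtin.command: >-
        ceph auth get-or-create client.crash
        mon 'profile crash' mgr 'profile crash'
        -o /etc/ceph/ceph.client.crash.keyring
      args:
        creates: /etc/ceph/ceph.client.crash.keyring   # keeps reruns idempotent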
********************************** 2025-07-06 20:12:00.648547 | orchestrator | Sunday 06 July 2025 20:09:43 +0000 (0:00:01.036) 0:08:29.602 *********** 2025-07-06 20:12:00.648551 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.648556 | orchestrator | 2025-07-06 20:12:00.648561 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-07-06 20:12:00.648566 | orchestrator | Sunday 06 July 2025 20:09:44 +0000 (0:00:01.216) 0:08:30.818 *********** 2025-07-06 20:12:00.648570 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.648575 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.648579 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.648584 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.648588 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.648593 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.648597 | orchestrator | 2025-07-06 20:12:00.648602 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-07-06 20:12:00.648606 | orchestrator | Sunday 06 July 2025 20:09:46 +0000 (0:00:01.766) 0:08:32.584 *********** 2025-07-06 20:12:00.648611 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.648615 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.648620 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.648624 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.648629 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:00.648633 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.648638 | orchestrator | 2025-07-06 20:12:00.648642 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-07-06 20:12:00.648647 | orchestrator | Sunday 06 July 2025 20:09:50 +0000 (0:00:03.889) 0:08:36.474 *********** 2025-07-06 20:12:00.648652 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:00.648656 | orchestrator | 2025-07-06 20:12:00.648661 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-07-06 20:12:00.648666 | orchestrator | Sunday 06 July 2025 20:09:51 +0000 (0:00:01.228) 0:08:37.702 *********** 2025-07-06 20:12:00.648670 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.648675 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.648679 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.648684 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.648689 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.648693 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.648698 | orchestrator | 2025-07-06 20:12:00.648702 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-07-06 20:12:00.648711 | orchestrator | Sunday 06 July 2025 20:09:52 +0000 (0:00:00.805) 0:08:38.507 *********** 2025-07-06 20:12:00.648716 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.648720 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.648725 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.648730 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:00.648734 | orchestrator | changed: 
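Rendering a unit file from a template and then enabling it is the same two-step pattern used for every containerized daemon in this log: write the unit, reload systemd, start the service, and let a handler restart it when the unit file changes. A sketch of that pattern with hypothetical file and unit names (ceph-crash.service.j2 and ceph-crash@.service are assumptions, not the role's actual names):

- name: Install and start a container-backed systemd unit (illustrative sketch only)
  hosts: all
  become: true
  tasks:
    - name: Generate systemd unit file for ceph-crash container
      ansible.builtin.template:
        src: ceph-crash.service.j2                     # hypothetical template
        dest: /etc/systemd/system/ceph-crash@.service  # assumed unit path
        mode: "0644"
      notify: Restart the ceph-crash service

    - name: Start the ceph-crash service
      ansible.builtin.systemd:
        name: "ceph-crash@{{ ansible_facts['hostname'] }}"
        state: started
        enabled: true
        daemon_reload: true

  handlers:
    - name: Restart the ceph-crash service
      ansible.builtin.systemd:
        name: "ceph-crash@{{ ansible_facts['hostname'] }}"
        state: restarted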
[testbed-node-1] 2025-07-06 20:12:00.648739 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:00.648743 | orchestrator | 2025-07-06 20:12:00.648750 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-07-06 20:12:00.648755 | orchestrator | Sunday 06 July 2025 20:09:54 +0000 (0:00:02.158) 0:08:40.666 *********** 2025-07-06 20:12:00.648759 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.648764 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.648769 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.648773 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:00.648778 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:00.648782 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:00.648787 | orchestrator | 2025-07-06 20:12:00.648791 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-07-06 20:12:00.648796 | orchestrator | 2025-07-06 20:12:00.648801 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-06 20:12:00.648805 | orchestrator | Sunday 06 July 2025 20:09:55 +0000 (0:00:01.142) 0:08:41.809 *********** 2025-07-06 20:12:00.648810 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.648816 | orchestrator | 2025-07-06 20:12:00.648824 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-06 20:12:00.648829 | orchestrator | Sunday 06 July 2025 20:09:56 +0000 (0:00:00.508) 0:08:42.317 *********** 2025-07-06 20:12:00.648835 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.648843 | orchestrator | 2025-07-06 20:12:00.648848 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-06 20:12:00.648853 | orchestrator | Sunday 06 July 2025 20:09:56 +0000 (0:00:00.785) 0:08:43.103 *********** 2025-07-06 20:12:00.648857 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.648862 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.648866 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.648872 | orchestrator | 2025-07-06 20:12:00.648880 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-06 20:12:00.648885 | orchestrator | Sunday 06 July 2025 20:09:57 +0000 (0:00:00.307) 0:08:43.410 *********** 2025-07-06 20:12:00.648890 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.648894 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.648899 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.648903 | orchestrator | 2025-07-06 20:12:00.648908 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-06 20:12:00.648913 | orchestrator | Sunday 06 July 2025 20:09:57 +0000 (0:00:00.669) 0:08:44.080 *********** 2025-07-06 20:12:00.648917 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.648922 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.648930 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.648936 | orchestrator | 2025-07-06 20:12:00.648940 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-06 20:12:00.648945 | orchestrator | Sunday 06 July 2025 20:09:58 +0000 
(0:00:01.010) 0:08:45.091 *********** 2025-07-06 20:12:00.648952 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.648957 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.648961 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.648966 | orchestrator | 2025-07-06 20:12:00.648972 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-06 20:12:00.648980 | orchestrator | Sunday 06 July 2025 20:09:59 +0000 (0:00:00.728) 0:08:45.820 *********** 2025-07-06 20:12:00.648989 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.648993 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.648998 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.649002 | orchestrator | 2025-07-06 20:12:00.649007 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-06 20:12:00.649012 | orchestrator | Sunday 06 July 2025 20:09:59 +0000 (0:00:00.297) 0:08:46.117 *********** 2025-07-06 20:12:00.649016 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.649021 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.649025 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.649030 | orchestrator | 2025-07-06 20:12:00.649034 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-06 20:12:00.649039 | orchestrator | Sunday 06 July 2025 20:10:00 +0000 (0:00:00.294) 0:08:46.412 *********** 2025-07-06 20:12:00.649044 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.649048 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.649053 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.649057 | orchestrator | 2025-07-06 20:12:00.649062 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-06 20:12:00.649066 | orchestrator | Sunday 06 July 2025 20:10:00 +0000 (0:00:00.579) 0:08:46.992 *********** 2025-07-06 20:12:00.649071 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.649075 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.649080 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.649085 | orchestrator | 2025-07-06 20:12:00.649089 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-06 20:12:00.649094 | orchestrator | Sunday 06 July 2025 20:10:01 +0000 (0:00:00.690) 0:08:47.682 *********** 2025-07-06 20:12:00.649098 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.649103 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.649108 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.649112 | orchestrator | 2025-07-06 20:12:00.649117 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-06 20:12:00.649121 | orchestrator | Sunday 06 July 2025 20:10:02 +0000 (0:00:00.691) 0:08:48.374 *********** 2025-07-06 20:12:00.649126 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.649130 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.649135 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.649139 | orchestrator | 2025-07-06 20:12:00.649144 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-06 20:12:00.649149 | orchestrator | Sunday 06 July 2025 20:10:02 +0000 (0:00:00.278) 0:08:48.653 *********** 2025-07-06 20:12:00.649153 | orchestrator | skipping: [testbed-node-3] 
2025-07-06 20:12:00.649158 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.649162 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.649167 | orchestrator | 2025-07-06 20:12:00.649171 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-06 20:12:00.649179 | orchestrator | Sunday 06 July 2025 20:10:03 +0000 (0:00:00.569) 0:08:49.222 *********** 2025-07-06 20:12:00.649183 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.649188 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.649193 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.649197 | orchestrator | 2025-07-06 20:12:00.649202 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-06 20:12:00.649206 | orchestrator | Sunday 06 July 2025 20:10:03 +0000 (0:00:00.336) 0:08:49.558 *********** 2025-07-06 20:12:00.649211 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.649215 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.649220 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.649225 | orchestrator | 2025-07-06 20:12:00.649229 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-06 20:12:00.649234 | orchestrator | Sunday 06 July 2025 20:10:03 +0000 (0:00:00.315) 0:08:49.874 *********** 2025-07-06 20:12:00.649238 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.649247 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.649251 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.649256 | orchestrator | 2025-07-06 20:12:00.649260 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-06 20:12:00.649265 | orchestrator | Sunday 06 July 2025 20:10:03 +0000 (0:00:00.310) 0:08:50.185 *********** 2025-07-06 20:12:00.649270 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.649274 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.649279 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.649283 | orchestrator | 2025-07-06 20:12:00.649288 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-06 20:12:00.649293 | orchestrator | Sunday 06 July 2025 20:10:04 +0000 (0:00:00.561) 0:08:50.747 *********** 2025-07-06 20:12:00.649297 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.649302 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.649306 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.649311 | orchestrator | 2025-07-06 20:12:00.649315 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-06 20:12:00.649320 | orchestrator | Sunday 06 July 2025 20:10:04 +0000 (0:00:00.294) 0:08:51.041 *********** 2025-07-06 20:12:00.649325 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.649329 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.649334 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.649338 | orchestrator | 2025-07-06 20:12:00.649343 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-06 20:12:00.649347 | orchestrator | Sunday 06 July 2025 20:10:05 +0000 (0:00:00.303) 0:08:51.345 *********** 2025-07-06 20:12:00.649352 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.649356 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.649361 | orchestrator | ok: 
[testbed-node-5] 2025-07-06 20:12:00.649366 | orchestrator | 2025-07-06 20:12:00.649370 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-06 20:12:00.649385 | orchestrator | Sunday 06 July 2025 20:10:05 +0000 (0:00:00.351) 0:08:51.696 *********** 2025-07-06 20:12:00.649390 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.649395 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.649402 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.649407 | orchestrator | 2025-07-06 20:12:00.649412 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-07-06 20:12:00.649416 | orchestrator | Sunday 06 July 2025 20:10:06 +0000 (0:00:00.767) 0:08:52.464 *********** 2025-07-06 20:12:00.649421 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.649425 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.649430 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-07-06 20:12:00.649435 | orchestrator | 2025-07-06 20:12:00.649439 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-07-06 20:12:00.649444 | orchestrator | Sunday 06 July 2025 20:10:06 +0000 (0:00:00.376) 0:08:52.840 *********** 2025-07-06 20:12:00.649448 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:12:00.649453 | orchestrator | 2025-07-06 20:12:00.649458 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-07-06 20:12:00.649462 | orchestrator | Sunday 06 July 2025 20:10:08 +0000 (0:00:02.290) 0:08:55.131 *********** 2025-07-06 20:12:00.649467 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-07-06 20:12:00.649474 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.649478 | orchestrator | 2025-07-06 20:12:00.649483 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-07-06 20:12:00.649488 | orchestrator | Sunday 06 July 2025 20:10:09 +0000 (0:00:00.205) 0:08:55.336 *********** 2025-07-06 20:12:00.649493 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-06 20:12:00.649506 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-06 20:12:00.649511 | orchestrator | 2025-07-06 20:12:00.649516 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-07-06 20:12:00.649520 | orchestrator | Sunday 06 July 2025 20:10:18 +0000 (0:00:09.066) 0:09:04.403 *********** 2025-07-06 20:12:00.649525 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:12:00.649529 | orchestrator | 2025-07-06 20:12:00.649534 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-07-06 
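The create_mds_filesystems.yml steps above boil down to three ceph operations: create the two pools shown in the log (cephfs_data and cephfs_metadata with pg_num 16 and the replicated_rule), tag them for CephFS, and create the filesystem. A minimal sketch run once from a monitor; wrapping the CLI in command tasks is illustrative, not the role's actual implementation:

- name: Create the CephFS pools and filesystem (illustrative sketch only)
  hosts: mons[0]                      # assumed monitor group name
  become: true
  tasks:
    - name: Create filesystem pools
      ansible.builtin.command: "ceph osd pool create {{ item }} 16 16 replicated replicated_rule"
      loop:
        - cephfs_data
        - cephfs_metadata

    - name: Tag the pools for CephFS
      ansible.builtin.command: "ceph osd pool application enable {{ item }} cephfs"
      loop:
        - cephfs_data
        - cephfs_metadata

    - name: Create ceph filesystem
      ansible.builtin.command: ceph fs new cephfs cephfs_metadata cephfs_data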
20:12:00.649541 | orchestrator | Sunday 06 July 2025 20:10:21 +0000 (0:00:03.653) 0:09:08.056 *********** 2025-07-06 20:12:00.649546 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.649550 | orchestrator | 2025-07-06 20:12:00.649555 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-07-06 20:12:00.649559 | orchestrator | Sunday 06 July 2025 20:10:22 +0000 (0:00:00.521) 0:09:08.578 *********** 2025-07-06 20:12:00.649564 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-07-06 20:12:00.649569 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-07-06 20:12:00.649573 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-07-06 20:12:00.649578 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-07-06 20:12:00.649586 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-07-06 20:12:00.649593 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-07-06 20:12:00.649601 | orchestrator | 2025-07-06 20:12:00.649609 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-07-06 20:12:00.649616 | orchestrator | Sunday 06 July 2025 20:10:23 +0000 (0:00:01.035) 0:09:09.614 *********** 2025-07-06 20:12:00.649624 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:12:00.649631 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-06 20:12:00.649639 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-06 20:12:00.649646 | orchestrator | 2025-07-06 20:12:00.649654 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-07-06 20:12:00.649662 | orchestrator | Sunday 06 July 2025 20:10:25 +0000 (0:00:02.385) 0:09:11.999 *********** 2025-07-06 20:12:00.649670 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-06 20:12:00.649677 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-06 20:12:00.649684 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.649691 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-06 20:12:00.649700 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-07-06 20:12:00.649707 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.649715 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-06 20:12:00.649719 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-07-06 20:12:00.649724 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.649728 | orchestrator | 2025-07-06 20:12:00.649733 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-07-06 20:12:00.649738 | orchestrator | Sunday 06 July 2025 20:10:27 +0000 (0:00:01.679) 0:09:13.679 *********** 2025-07-06 20:12:00.649742 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.649747 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.649754 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.649764 | orchestrator | 2025-07-06 20:12:00.649769 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-07-06 20:12:00.649773 | orchestrator | Sunday 06 July 2025 20:10:30 +0000 
(0:00:02.813) 0:09:16.492 *********** 2025-07-06 20:12:00.649778 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.649782 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.649787 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.649791 | orchestrator | 2025-07-06 20:12:00.649796 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-07-06 20:12:00.649801 | orchestrator | Sunday 06 July 2025 20:10:30 +0000 (0:00:00.342) 0:09:16.835 *********** 2025-07-06 20:12:00.649805 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.649810 | orchestrator | 2025-07-06 20:12:00.649814 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-07-06 20:12:00.649819 | orchestrator | Sunday 06 July 2025 20:10:31 +0000 (0:00:00.853) 0:09:17.689 *********** 2025-07-06 20:12:00.649824 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.649828 | orchestrator | 2025-07-06 20:12:00.649833 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-07-06 20:12:00.649837 | orchestrator | Sunday 06 July 2025 20:10:32 +0000 (0:00:00.550) 0:09:18.240 *********** 2025-07-06 20:12:00.649842 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.649846 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.649851 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.649855 | orchestrator | 2025-07-06 20:12:00.649860 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-07-06 20:12:00.649865 | orchestrator | Sunday 06 July 2025 20:10:33 +0000 (0:00:01.367) 0:09:19.607 *********** 2025-07-06 20:12:00.649869 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.649874 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.649878 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.649883 | orchestrator | 2025-07-06 20:12:00.649887 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-07-06 20:12:00.649892 | orchestrator | Sunday 06 July 2025 20:10:34 +0000 (0:00:01.534) 0:09:21.141 *********** 2025-07-06 20:12:00.649896 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.649901 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.649905 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.649910 | orchestrator | 2025-07-06 20:12:00.649914 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-07-06 20:12:00.649919 | orchestrator | Sunday 06 July 2025 20:10:36 +0000 (0:00:01.919) 0:09:23.061 *********** 2025-07-06 20:12:00.649924 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.649928 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.649933 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.649937 | orchestrator | 2025-07-06 20:12:00.649942 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-07-06 20:12:00.649946 | orchestrator | Sunday 06 July 2025 20:10:38 +0000 (0:00:02.007) 0:09:25.068 *********** 2025-07-06 20:12:00.649955 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.649959 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.649964 | 
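After the mds container is started, the role waits for its admin socket to appear before the handlers run. A sketch of such a wait, assuming the conventional /var/run/ceph/<cluster>-mds.<hostname>.asok socket path (the exact path used by the role is not shown in this log):

- name: Wait for the mds admin socket (illustrative sketch only)
  hosts: mdss                         # assumed MDS group name
  become: true
  tasks:
    - name: Wait for mds socket to exist
      ansible.builtin.wait_for:
        path: "/var/run/ceph/ceph-mds.{{ ansible_facts['hostname'] }}.asok"  # assumed path
        timeout: 60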
orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.649969 | orchestrator | 2025-07-06 20:12:00.649973 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-06 20:12:00.649978 | orchestrator | Sunday 06 July 2025 20:10:40 +0000 (0:00:01.700) 0:09:26.768 *********** 2025-07-06 20:12:00.649982 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.649987 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.649992 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.649996 | orchestrator | 2025-07-06 20:12:00.650001 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-07-06 20:12:00.650009 | orchestrator | Sunday 06 July 2025 20:10:41 +0000 (0:00:00.755) 0:09:27.524 *********** 2025-07-06 20:12:00.650042 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.650048 | orchestrator | 2025-07-06 20:12:00.650053 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-07-06 20:12:00.650057 | orchestrator | Sunday 06 July 2025 20:10:42 +0000 (0:00:00.838) 0:09:28.362 *********** 2025-07-06 20:12:00.650062 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.650066 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.650071 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.650075 | orchestrator | 2025-07-06 20:12:00.650080 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-07-06 20:12:00.650085 | orchestrator | Sunday 06 July 2025 20:10:42 +0000 (0:00:00.371) 0:09:28.734 *********** 2025-07-06 20:12:00.650089 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.650094 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.650098 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.650103 | orchestrator | 2025-07-06 20:12:00.650107 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-07-06 20:12:00.650112 | orchestrator | Sunday 06 July 2025 20:10:43 +0000 (0:00:01.214) 0:09:29.948 *********** 2025-07-06 20:12:00.650117 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:12:00.650121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:12:00.650126 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:12:00.650130 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.650135 | orchestrator | 2025-07-06 20:12:00.650139 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-07-06 20:12:00.650144 | orchestrator | Sunday 06 July 2025 20:10:44 +0000 (0:00:00.950) 0:09:30.899 *********** 2025-07-06 20:12:00.650148 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.650153 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.650157 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.650162 | orchestrator | 2025-07-06 20:12:00.650167 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-07-06 20:12:00.650171 | orchestrator | 2025-07-06 20:12:00.650178 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-06 20:12:00.650183 | orchestrator | Sunday 06 July 2025 20:10:45 +0000 (0:00:00.913) 0:09:31.812 *********** 2025-07-06 
20:12:00.650188 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.650192 | orchestrator | 2025-07-06 20:12:00.650197 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-06 20:12:00.650202 | orchestrator | Sunday 06 July 2025 20:10:46 +0000 (0:00:00.630) 0:09:32.443 *********** 2025-07-06 20:12:00.650206 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.650211 | orchestrator | 2025-07-06 20:12:00.650215 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-06 20:12:00.650220 | orchestrator | Sunday 06 July 2025 20:10:47 +0000 (0:00:00.893) 0:09:33.336 *********** 2025-07-06 20:12:00.650224 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.650229 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.650234 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.650238 | orchestrator | 2025-07-06 20:12:00.650243 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-06 20:12:00.650247 | orchestrator | Sunday 06 July 2025 20:10:47 +0000 (0:00:00.353) 0:09:33.690 *********** 2025-07-06 20:12:00.650252 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.650256 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.650261 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.650265 | orchestrator | 2025-07-06 20:12:00.650270 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-06 20:12:00.650279 | orchestrator | Sunday 06 July 2025 20:10:48 +0000 (0:00:00.697) 0:09:34.388 *********** 2025-07-06 20:12:00.650284 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.650288 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.650293 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.650297 | orchestrator | 2025-07-06 20:12:00.650302 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-06 20:12:00.650306 | orchestrator | Sunday 06 July 2025 20:10:48 +0000 (0:00:00.723) 0:09:35.111 *********** 2025-07-06 20:12:00.650311 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.650315 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.650320 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.650324 | orchestrator | 2025-07-06 20:12:00.650329 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-06 20:12:00.650334 | orchestrator | Sunday 06 July 2025 20:10:50 +0000 (0:00:01.132) 0:09:36.244 *********** 2025-07-06 20:12:00.650338 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.650343 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.650347 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.650352 | orchestrator | 2025-07-06 20:12:00.650356 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-06 20:12:00.650361 | orchestrator | Sunday 06 July 2025 20:10:50 +0000 (0:00:00.327) 0:09:36.571 *********** 2025-07-06 20:12:00.650366 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.650370 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.650408 | orchestrator | skipping: [testbed-node-5] 
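Each play in this log opens with the same ceph-handler preamble: probe which ceph containers are already running on a host, then turn the probe results into handler_*_status facts that gate the restart handlers later on. A sketch of what one probe/fact pair can look like, assuming Docker as the container runtime and a ceph-osd name filter (both assumptions; the role may use a different runtime and filters):

- name: Probe for a running ceph-osd container (illustrative sketch only)
  hosts: osds                         # assumed OSD group name
  become: true
  tasks:
    - name: Check for an osd container
      ansible.builtin.command: docker ps -q --filter "name=ceph-osd"
      register: osd_container
      changed_when: false
      failed_when: false

    - name: Set_fact handler_osd_status
      ansible.builtin.set_fact:
        handler_osd_status: "{{ osd_container.stdout | length > 0 }}"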
2025-07-06 20:12:00.650414 | orchestrator | 2025-07-06 20:12:00.650419 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-06 20:12:00.650424 | orchestrator | Sunday 06 July 2025 20:10:50 +0000 (0:00:00.385) 0:09:36.956 *********** 2025-07-06 20:12:00.650428 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.650433 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.650437 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.650442 | orchestrator | 2025-07-06 20:12:00.650447 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-06 20:12:00.650451 | orchestrator | Sunday 06 July 2025 20:10:51 +0000 (0:00:00.399) 0:09:37.356 *********** 2025-07-06 20:12:00.650456 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.650460 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.650465 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.650469 | orchestrator | 2025-07-06 20:12:00.650474 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-06 20:12:00.650479 | orchestrator | Sunday 06 July 2025 20:10:52 +0000 (0:00:01.248) 0:09:38.605 *********** 2025-07-06 20:12:00.650483 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.650488 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.650492 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.650497 | orchestrator | 2025-07-06 20:12:00.650502 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-06 20:12:00.650506 | orchestrator | Sunday 06 July 2025 20:10:53 +0000 (0:00:00.669) 0:09:39.275 *********** 2025-07-06 20:12:00.650511 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.650515 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.650520 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.650524 | orchestrator | 2025-07-06 20:12:00.650529 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-06 20:12:00.650534 | orchestrator | Sunday 06 July 2025 20:10:53 +0000 (0:00:00.291) 0:09:39.566 *********** 2025-07-06 20:12:00.650538 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.650543 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.650547 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.650552 | orchestrator | 2025-07-06 20:12:00.650556 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-06 20:12:00.650561 | orchestrator | Sunday 06 July 2025 20:10:53 +0000 (0:00:00.297) 0:09:39.864 *********** 2025-07-06 20:12:00.650569 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.650574 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.650578 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.650583 | orchestrator | 2025-07-06 20:12:00.650588 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-06 20:12:00.650592 | orchestrator | Sunday 06 July 2025 20:10:54 +0000 (0:00:00.610) 0:09:40.475 *********** 2025-07-06 20:12:00.650597 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.650601 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.650606 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.650610 | orchestrator | 2025-07-06 20:12:00.650615 | orchestrator | TASK [ceph-handler : Set_fact 
handler_rgw_status] ****************************** 2025-07-06 20:12:00.650623 | orchestrator | Sunday 06 July 2025 20:10:54 +0000 (0:00:00.337) 0:09:40.812 *********** 2025-07-06 20:12:00.650628 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.650632 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.650637 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.650641 | orchestrator | 2025-07-06 20:12:00.650646 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-06 20:12:00.650651 | orchestrator | Sunday 06 July 2025 20:10:54 +0000 (0:00:00.315) 0:09:41.128 *********** 2025-07-06 20:12:00.650655 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.650660 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.650664 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.650669 | orchestrator | 2025-07-06 20:12:00.650674 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-06 20:12:00.650678 | orchestrator | Sunday 06 July 2025 20:10:55 +0000 (0:00:00.305) 0:09:41.433 *********** 2025-07-06 20:12:00.650683 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.650687 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.650692 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.650696 | orchestrator | 2025-07-06 20:12:00.650701 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-06 20:12:00.650705 | orchestrator | Sunday 06 July 2025 20:10:55 +0000 (0:00:00.602) 0:09:42.036 *********** 2025-07-06 20:12:00.650710 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.650715 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.650719 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.650724 | orchestrator | 2025-07-06 20:12:00.650728 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-06 20:12:00.650733 | orchestrator | Sunday 06 July 2025 20:10:56 +0000 (0:00:00.417) 0:09:42.454 *********** 2025-07-06 20:12:00.650737 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.650742 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.650746 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.650751 | orchestrator | 2025-07-06 20:12:00.650756 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-06 20:12:00.650760 | orchestrator | Sunday 06 July 2025 20:10:56 +0000 (0:00:00.377) 0:09:42.831 *********** 2025-07-06 20:12:00.650765 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.650769 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.650774 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.650778 | orchestrator | 2025-07-06 20:12:00.650783 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-07-06 20:12:00.650788 | orchestrator | Sunday 06 July 2025 20:10:57 +0000 (0:00:00.895) 0:09:43.727 *********** 2025-07-06 20:12:00.650792 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.650797 | orchestrator | 2025-07-06 20:12:00.650801 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-07-06 20:12:00.650806 | orchestrator | Sunday 06 July 2025 20:10:58 +0000 (0:00:00.556) 0:09:44.283 *********** 2025-07-06 
20:12:00.650811 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:12:00.650819 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-06 20:12:00.650827 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-06 20:12:00.650831 | orchestrator | 2025-07-06 20:12:00.650836 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-07-06 20:12:00.650840 | orchestrator | Sunday 06 July 2025 20:11:00 +0000 (0:00:02.217) 0:09:46.500 *********** 2025-07-06 20:12:00.650845 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-06 20:12:00.650850 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-06 20:12:00.650854 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.650859 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-06 20:12:00.650863 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-07-06 20:12:00.650868 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.650873 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-06 20:12:00.650877 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-07-06 20:12:00.650882 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.650886 | orchestrator | 2025-07-06 20:12:00.650891 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-07-06 20:12:00.650896 | orchestrator | Sunday 06 July 2025 20:11:01 +0000 (0:00:01.420) 0:09:47.920 *********** 2025-07-06 20:12:00.650900 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.650905 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.650909 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.650914 | orchestrator | 2025-07-06 20:12:00.650918 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-07-06 20:12:00.650923 | orchestrator | Sunday 06 July 2025 20:11:02 +0000 (0:00:00.354) 0:09:48.275 *********** 2025-07-06 20:12:00.650928 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.650932 | orchestrator | 2025-07-06 20:12:00.650937 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-07-06 20:12:00.650941 | orchestrator | Sunday 06 July 2025 20:11:02 +0000 (0:00:00.675) 0:09:48.951 *********** 2025-07-06 20:12:00.650945 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-06 20:12:00.650950 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-06 20:12:00.650954 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-06 20:12:00.650958 | orchestrator | 2025-07-06 20:12:00.650962 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-07-06 20:12:00.650969 | orchestrator | Sunday 06 July 2025 20:11:04 +0000 (0:00:01.292) 0:09:50.243 *********** 2025-07-06 20:12:00.650974 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 
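"Create rados gateway directories" loops over the rgw_instances structure visible in the items above (instance_name, radosgw_address, radosgw_frontend_port) and prepares one data directory per instance. A hedged sketch of that loop, reusing the item shown in the log; the directory path and the ownership values are assumptions, not copied from the role:

- name: Create rados gateway directories (sketch)
  hosts: testbed-node-3
  gather_facts: true
  vars:
    rgw_instances:
      - { instance_name: 'rgw0', radosgw_address: '192.168.16.13', radosgw_frontend_port: 8081 }
  tasks:
    - name: Create one data directory per rgw instance
      ansible.builtin.file:
        path: "/var/lib/ceph/radosgw/ceph-rgw.{{ ansible_facts.hostname }}.{{ item.instance_name }}"
        state: directory
        owner: "167"   # assumed ceph uid/gid used inside the container images
        group: "167"
        mode: "0755"
      loop: "{{ rgw_instances }}"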
2025-07-06 20:12:00.650978 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-06 20:12:00.650982 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:12:00.650986 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-06 20:12:00.650990 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:12:00.650995 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-06 20:12:00.650999 | orchestrator | 2025-07-06 20:12:00.651003 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-07-06 20:12:00.651007 | orchestrator | Sunday 06 July 2025 20:11:08 +0000 (0:00:04.682) 0:09:54.926 *********** 2025-07-06 20:12:00.651015 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:12:00.651019 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-06 20:12:00.651023 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:12:00.651027 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-06 20:12:00.651031 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:12:00.651036 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-06 20:12:00.651040 | orchestrator | 2025-07-06 20:12:00.651044 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-07-06 20:12:00.651048 | orchestrator | Sunday 06 July 2025 20:11:11 +0000 (0:00:02.274) 0:09:57.200 *********** 2025-07-06 20:12:00.651052 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-06 20:12:00.651056 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.651060 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-06 20:12:00.651065 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.651069 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-06 20:12:00.651073 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.651077 | orchestrator | 2025-07-06 20:12:00.651081 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-07-06 20:12:00.651085 | orchestrator | Sunday 06 July 2025 20:11:12 +0000 (0:00:01.184) 0:09:58.385 *********** 2025-07-06 20:12:00.651089 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-07-06 20:12:00.651093 | orchestrator | 2025-07-06 20:12:00.651098 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-07-06 20:12:00.651104 | orchestrator | Sunday 06 July 2025 20:11:12 +0000 (0:00:00.216) 0:09:58.602 *********** 2025-07-06 20:12:00.651108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:12:00.651113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:12:00.651117 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:12:00.651121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:12:00.651126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:12:00.651130 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.651134 | orchestrator | 2025-07-06 20:12:00.651138 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-07-06 20:12:00.651142 | orchestrator | Sunday 06 July 2025 20:11:13 +0000 (0:00:01.027) 0:09:59.629 *********** 2025-07-06 20:12:00.651146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:12:00.651151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:12:00.651155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:12:00.651159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:12:00.651163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:12:00.651167 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.651171 | orchestrator | 2025-07-06 20:12:00.651175 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-07-06 20:12:00.651183 | orchestrator | Sunday 06 July 2025 20:11:14 +0000 (0:00:00.606) 0:10:00.236 *********** 2025-07-06 20:12:00.651190 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-06 20:12:00.651194 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-06 20:12:00.651199 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-06 20:12:00.651203 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-06 20:12:00.651207 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-06 20:12:00.651211 | orchestrator | 2025-07-06 20:12:00.651216 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-07-06 20:12:00.651220 | orchestrator | Sunday 06 July 2025 20:11:44 +0000 (0:00:30.901) 0:10:31.137 *********** 2025-07-06 20:12:00.651224 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.651228 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.651232 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.651236 | orchestrator | 
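"Create rgw pools" is the single most expensive ceph-rgw step in this run (about 31 s in the recap below): it iterates over the default.rgw.* pool definitions shown above and creates each replicated pool from a monitor node. A minimal sketch of the equivalent, with pool names, pg_num and size copied from the log items; it assumes a ceph CLI reachable on the delegate host, whereas the containerized deployment actually runs the command inside the monitor container:

- name: Create rgw pools (sketch)
  hosts: testbed-node-0
  gather_facts: false
  vars:
    rgw_create_pools:
      default.rgw.buckets.data:  { pg_num: 8, size: 3, type: replicated }
      default.rgw.buckets.index: { pg_num: 8, size: 3, type: replicated }
      default.rgw.control:       { pg_num: 8, size: 3, type: replicated }
      default.rgw.log:           { pg_num: 8, size: 3, type: replicated }
      default.rgw.meta:          { pg_num: 8, size: 3, type: replicated }
  tasks:
    - name: Create each replicated pool and set its replica count
      ansible.builtin.shell: |
        ceph osd pool create {{ item.key }} {{ item.value.pg_num }} replicated
        ceph osd pool set {{ item.key }} size {{ item.value.size }}
      loop: "{{ rgw_create_pools | dict2items }}"

Pool creation waits for placement groups to become active, which is why this step dominates the task timing recap.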
2025-07-06 20:12:00.651240 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-07-06 20:12:00.651245 | orchestrator | Sunday 06 July 2025 20:11:45 +0000 (0:00:00.289) 0:10:31.427 *********** 2025-07-06 20:12:00.651249 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.651253 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.651257 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.651261 | orchestrator | 2025-07-06 20:12:00.651265 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-07-06 20:12:00.651269 | orchestrator | Sunday 06 July 2025 20:11:45 +0000 (0:00:00.294) 0:10:31.721 *********** 2025-07-06 20:12:00.651273 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.651278 | orchestrator | 2025-07-06 20:12:00.651282 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-07-06 20:12:00.651286 | orchestrator | Sunday 06 July 2025 20:11:46 +0000 (0:00:00.770) 0:10:32.492 *********** 2025-07-06 20:12:00.651290 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.651294 | orchestrator | 2025-07-06 20:12:00.651298 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-07-06 20:12:00.651302 | orchestrator | Sunday 06 July 2025 20:11:46 +0000 (0:00:00.524) 0:10:33.016 *********** 2025-07-06 20:12:00.651307 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.651311 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.651315 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.651319 | orchestrator | 2025-07-06 20:12:00.651325 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-07-06 20:12:00.651329 | orchestrator | Sunday 06 July 2025 20:11:48 +0000 (0:00:01.236) 0:10:34.253 *********** 2025-07-06 20:12:00.651333 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.651338 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.651342 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.651346 | orchestrator | 2025-07-06 20:12:00.651350 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-07-06 20:12:00.651354 | orchestrator | Sunday 06 July 2025 20:11:49 +0000 (0:00:01.395) 0:10:35.649 *********** 2025-07-06 20:12:00.651358 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:00.651366 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:00.651370 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:00.651374 | orchestrator | 2025-07-06 20:12:00.651391 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-07-06 20:12:00.651395 | orchestrator | Sunday 06 July 2025 20:11:51 +0000 (0:00:01.930) 0:10:37.580 *********** 2025-07-06 20:12:00.651399 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-06 20:12:00.651403 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-06 20:12:00.651408 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-06 20:12:00.651412 | orchestrator | 2025-07-06 20:12:00.651416 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-06 20:12:00.651420 | orchestrator | Sunday 06 July 2025 20:11:55 +0000 (0:00:04.018) 0:10:41.598 *********** 2025-07-06 20:12:00.651424 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.651428 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.651433 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.651437 | orchestrator | 2025-07-06 20:12:00.651441 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-07-06 20:12:00.651445 | orchestrator | Sunday 06 July 2025 20:11:55 +0000 (0:00:00.376) 0:10:41.975 *********** 2025-07-06 20:12:00.651449 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:00.651453 | orchestrator | 2025-07-06 20:12:00.651457 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-07-06 20:12:00.651461 | orchestrator | Sunday 06 July 2025 20:11:56 +0000 (0:00:00.542) 0:10:42.517 *********** 2025-07-06 20:12:00.651466 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.651470 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.651474 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.651478 | orchestrator | 2025-07-06 20:12:00.651485 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-07-06 20:12:00.651489 | orchestrator | Sunday 06 July 2025 20:11:57 +0000 (0:00:00.682) 0:10:43.199 *********** 2025-07-06 20:12:00.651493 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.651497 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:12:00.651501 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:12:00.651505 | orchestrator | 2025-07-06 20:12:00.651509 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-07-06 20:12:00.651514 | orchestrator | Sunday 06 July 2025 20:11:57 +0000 (0:00:00.342) 0:10:43.542 *********** 2025-07-06 20:12:00.651518 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:12:00.651522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:12:00.651526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:12:00.651530 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:12:00.651534 | orchestrator | 2025-07-06 20:12:00.651538 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-07-06 20:12:00.651542 | orchestrator | Sunday 06 July 2025 20:11:57 +0000 (0:00:00.589) 0:10:44.131 *********** 2025-07-06 20:12:00.651547 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:00.651551 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:00.651555 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:00.651559 | orchestrator | 2025-07-06 20:12:00.651563 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:12:00.651568 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-07-06 20:12:00.651572 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 
ignored=0 2025-07-06 20:12:00.651581 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-07-06 20:12:00.651585 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-07-06 20:12:00.651589 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-07-06 20:12:00.651593 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-07-06 20:12:00.651597 | orchestrator | 2025-07-06 20:12:00.651602 | orchestrator | 2025-07-06 20:12:00.651606 | orchestrator | 2025-07-06 20:12:00.651610 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:12:00.651614 | orchestrator | Sunday 06 July 2025 20:11:58 +0000 (0:00:00.256) 0:10:44.387 *********** 2025-07-06 20:12:00.651620 | orchestrator | =============================================================================== 2025-07-06 20:12:00.651625 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 74.59s 2025-07-06 20:12:00.651629 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.64s 2025-07-06 20:12:00.651633 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.90s 2025-07-06 20:12:00.651637 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 29.91s 2025-07-06 20:12:00.651641 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.75s 2025-07-06 20:12:00.651645 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.95s 2025-07-06 20:12:00.651649 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.55s 2025-07-06 20:12:00.651654 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.27s 2025-07-06 20:12:00.651658 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.07s 2025-07-06 20:12:00.651662 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.23s 2025-07-06 20:12:00.651666 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.12s 2025-07-06 20:12:00.651670 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.85s 2025-07-06 20:12:00.651674 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.68s 2025-07-06 20:12:00.651678 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.13s 2025-07-06 20:12:00.651682 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.09s 2025-07-06 20:12:00.651686 | orchestrator | ceph-rgw : Systemd start rgw container ---------------------------------- 4.02s 2025-07-06 20:12:00.651691 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.89s 2025-07-06 20:12:00.651695 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.65s 2025-07-06 20:12:00.651699 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.54s 2025-07-06 20:12:00.651703 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.19s 2025-07-06 
20:12:00.651707 | orchestrator | 2025-07-06 20:12:00 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:00.651711 | orchestrator | 2025-07-06 20:12:00 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:00.651718 | orchestrator | 2025-07-06 20:12:00 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:03.683149 | orchestrator | 2025-07-06 20:12:03 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:03.684573 | orchestrator | 2025-07-06 20:12:03 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:03.686954 | orchestrator | 2025-07-06 20:12:03 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:03.687049 | orchestrator | 2025-07-06 20:12:03 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:06.725965 | orchestrator | 2025-07-06 20:12:06 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:06.727168 | orchestrator | 2025-07-06 20:12:06 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:06.729122 | orchestrator | 2025-07-06 20:12:06 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:06.729448 | orchestrator | 2025-07-06 20:12:06 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:09.782693 | orchestrator | 2025-07-06 20:12:09 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:09.783990 | orchestrator | 2025-07-06 20:12:09 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:09.785542 | orchestrator | 2025-07-06 20:12:09 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:09.785583 | orchestrator | 2025-07-06 20:12:09 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:12.839599 | orchestrator | 2025-07-06 20:12:12 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:12.840662 | orchestrator | 2025-07-06 20:12:12 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:12.843407 | orchestrator | 2025-07-06 20:12:12 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:12.843453 | orchestrator | 2025-07-06 20:12:12 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:15.884848 | orchestrator | 2025-07-06 20:12:15 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:15.885967 | orchestrator | 2025-07-06 20:12:15 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:15.887655 | orchestrator | 2025-07-06 20:12:15 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:15.887719 | orchestrator | 2025-07-06 20:12:15 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:18.946993 | orchestrator | 2025-07-06 20:12:18 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:18.947733 | orchestrator | 2025-07-06 20:12:18 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:18.949209 | orchestrator | 2025-07-06 20:12:18 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:18.949242 | orchestrator | 2025-07-06 20:12:18 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:21.996192 | orchestrator | 2025-07-06 
20:12:21 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:21.998239 | orchestrator | 2025-07-06 20:12:21 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:21.999740 | orchestrator | 2025-07-06 20:12:21 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:21.999825 | orchestrator | 2025-07-06 20:12:21 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:25.056538 | orchestrator | 2025-07-06 20:12:25 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:25.058292 | orchestrator | 2025-07-06 20:12:25 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:25.059447 | orchestrator | 2025-07-06 20:12:25 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:25.059817 | orchestrator | 2025-07-06 20:12:25 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:28.105840 | orchestrator | 2025-07-06 20:12:28 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:28.107230 | orchestrator | 2025-07-06 20:12:28 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:28.109300 | orchestrator | 2025-07-06 20:12:28 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:28.109401 | orchestrator | 2025-07-06 20:12:28 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:31.159730 | orchestrator | 2025-07-06 20:12:31 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:31.160261 | orchestrator | 2025-07-06 20:12:31 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:31.161649 | orchestrator | 2025-07-06 20:12:31 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:31.161675 | orchestrator | 2025-07-06 20:12:31 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:34.210984 | orchestrator | 2025-07-06 20:12:34 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:34.213077 | orchestrator | 2025-07-06 20:12:34 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:34.215092 | orchestrator | 2025-07-06 20:12:34 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:34.215152 | orchestrator | 2025-07-06 20:12:34 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:37.256879 | orchestrator | 2025-07-06 20:12:37 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:37.259062 | orchestrator | 2025-07-06 20:12:37 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:37.261259 | orchestrator | 2025-07-06 20:12:37 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:37.261304 | orchestrator | 2025-07-06 20:12:37 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:40.309797 | orchestrator | 2025-07-06 20:12:40 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:40.312141 | orchestrator | 2025-07-06 20:12:40 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:40.313219 | orchestrator | 2025-07-06 20:12:40 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:40.313335 | orchestrator | 2025-07-06 20:12:40 | INFO  | Wait 1 
second(s) until the next check 2025-07-06 20:12:43.362968 | orchestrator | 2025-07-06 20:12:43 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:43.364447 | orchestrator | 2025-07-06 20:12:43 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:43.365733 | orchestrator | 2025-07-06 20:12:43 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:43.365768 | orchestrator | 2025-07-06 20:12:43 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:46.415893 | orchestrator | 2025-07-06 20:12:46 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:46.420142 | orchestrator | 2025-07-06 20:12:46 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:46.423561 | orchestrator | 2025-07-06 20:12:46 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:46.423641 | orchestrator | 2025-07-06 20:12:46 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:49.475709 | orchestrator | 2025-07-06 20:12:49 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:49.478560 | orchestrator | 2025-07-06 20:12:49 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:49.480063 | orchestrator | 2025-07-06 20:12:49 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:49.480197 | orchestrator | 2025-07-06 20:12:49 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:52.533808 | orchestrator | 2025-07-06 20:12:52 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:52.535373 | orchestrator | 2025-07-06 20:12:52 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:52.539629 | orchestrator | 2025-07-06 20:12:52 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:52.539693 | orchestrator | 2025-07-06 20:12:52 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:55.590494 | orchestrator | 2025-07-06 20:12:55 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:55.591859 | orchestrator | 2025-07-06 20:12:55 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:55.593991 | orchestrator | 2025-07-06 20:12:55 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:55.594671 | orchestrator | 2025-07-06 20:12:55 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:58.643925 | orchestrator | 2025-07-06 20:12:58 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:12:58.646454 | orchestrator | 2025-07-06 20:12:58 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:12:58.648637 | orchestrator | 2025-07-06 20:12:58 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:12:58.648686 | orchestrator | 2025-07-06 20:12:58 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:01.690446 | orchestrator | 2025-07-06 20:13:01 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:13:01.691240 | orchestrator | 2025-07-06 20:13:01 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:13:01.692882 | orchestrator | 2025-07-06 20:13:01 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state 
STARTED 2025-07-06 20:13:01.692904 | orchestrator | 2025-07-06 20:13:01 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:04.739573 | orchestrator | 2025-07-06 20:13:04 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:13:04.741082 | orchestrator | 2025-07-06 20:13:04 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:13:04.743002 | orchestrator | 2025-07-06 20:13:04 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:04.743040 | orchestrator | 2025-07-06 20:13:04 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:07.795078 | orchestrator | 2025-07-06 20:13:07 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:13:07.795183 | orchestrator | 2025-07-06 20:13:07 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:13:07.795200 | orchestrator | 2025-07-06 20:13:07 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:07.795212 | orchestrator | 2025-07-06 20:13:07 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:10.848480 | orchestrator | 2025-07-06 20:13:10 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:13:10.850118 | orchestrator | 2025-07-06 20:13:10 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:13:10.853012 | orchestrator | 2025-07-06 20:13:10 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:10.853086 | orchestrator | 2025-07-06 20:13:10 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:13.896214 | orchestrator | 2025-07-06 20:13:13 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:13:13.897381 | orchestrator | 2025-07-06 20:13:13 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:13:13.899926 | orchestrator | 2025-07-06 20:13:13 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:13.899969 | orchestrator | 2025-07-06 20:13:13 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:16.945922 | orchestrator | 2025-07-06 20:13:16 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:13:16.948091 | orchestrator | 2025-07-06 20:13:16 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state STARTED 2025-07-06 20:13:16.950582 | orchestrator | 2025-07-06 20:13:16 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:16.950629 | orchestrator | 2025-07-06 20:13:16 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:19.990956 | orchestrator | 2025-07-06 20:13:19 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state STARTED 2025-07-06 20:13:19.995927 | orchestrator | 2025-07-06 20:13:19.995998 | orchestrator | 2025-07-06 20:13:19.996011 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:13:19.996024 | orchestrator | 2025-07-06 20:13:19.996035 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:13:19.996046 | orchestrator | Sunday 06 July 2025 20:10:12 +0000 (0:00:00.253) 0:00:00.253 *********** 2025-07-06 20:13:19.996057 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:13:19.996069 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:13:19.996104 | orchestrator | ok: [testbed-node-2] 
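The "Group hosts based on configuration" play that starts here is kolla-ansible's standard preamble: it sorts hosts into dynamic groups from the requested action and from enable_* flags, which is where the enable_opensearch_True group seen above comes from. A simplified sketch of that grouping, reduced to the one service relevant here (the real play covers many services):

- name: Group hosts based on configuration (sketch)
  hosts: testbed-node-0,testbed-node-1,testbed-node-2
  gather_facts: false
  tasks:
    - name: Group hosts based on Kolla action
      ansible.builtin.group_by:
        key: "kolla_action_{{ kolla_action | default('deploy') }}"
      changed_when: false

    - name: Group hosts based on enabled services
      # Produces groups such as enable_opensearch_True, which later plays target.
      ansible.builtin.group_by:
        key: "{{ item }}"
      loop:
        - "enable_opensearch_{{ enable_opensearch | default(true) | bool }}"
      changed_when: false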
2025-07-06 20:13:19.996119 | orchestrator | 2025-07-06 20:13:19.996137 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:13:19.996155 | orchestrator | Sunday 06 July 2025 20:10:12 +0000 (0:00:00.284) 0:00:00.537 *********** 2025-07-06 20:13:19.996172 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-07-06 20:13:19.996190 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-07-06 20:13:19.996209 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-07-06 20:13:19.996228 | orchestrator | 2025-07-06 20:13:19.996267 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-07-06 20:13:19.996288 | orchestrator | 2025-07-06 20:13:19.996307 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-06 20:13:19.996364 | orchestrator | Sunday 06 July 2025 20:10:12 +0000 (0:00:00.407) 0:00:00.944 *********** 2025-07-06 20:13:19.996376 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:13:19.996388 | orchestrator | 2025-07-06 20:13:19.996399 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-07-06 20:13:19.996410 | orchestrator | Sunday 06 July 2025 20:10:13 +0000 (0:00:00.479) 0:00:01.423 *********** 2025-07-06 20:13:19.996421 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-06 20:13:19.996431 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-06 20:13:19.996465 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-06 20:13:19.996477 | orchestrator | 2025-07-06 20:13:19.996488 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-07-06 20:13:19.996498 | orchestrator | Sunday 06 July 2025 20:10:13 +0000 (0:00:00.689) 0:00:02.112 *********** 2025-07-06 20:13:19.996515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:13:19.996533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:13:19.996565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:13:19.996588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:13:19.996614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:13:19.996629 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:13:19.996643 | orchestrator | 2025-07-06 20:13:19.996655 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-06 20:13:19.996668 | orchestrator | Sunday 06 July 2025 20:10:15 +0000 (0:00:01.628) 0:00:03.741 *********** 2025-07-06 20:13:19.996685 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:13:19.996697 | orchestrator | 2025-07-06 20:13:19.996710 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-07-06 20:13:19.996723 | orchestrator | Sunday 06 July 2025 20:10:16 +0000 (0:00:00.508) 0:00:04.249 *********** 2025-07-06 20:13:19.996744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:13:19.996763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:13:19.996784 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:13:19.996799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:13:19.996820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:13:19.996840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:13:19.996859 | orchestrator | 2025-07-06 20:13:19.996871 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-07-06 20:13:19.996882 | orchestrator | Sunday 06 July 2025 20:10:18 +0000 (0:00:02.473) 0:00:06.722 *********** 2025-07-06 20:13:19.996893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-06 20:13:19.996905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-06 20:13:19.996917 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:13:19.996936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-06 20:13:19.996953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-06 20:13:19.996971 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:19.996983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-06 20:13:19.996995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-06 20:13:19.997006 | orchestrator | skipping: [testbed-node-2] 
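Both "backend internal TLS" copy tasks are skipped on every node, which indicates that backend TLS is disabled for this testbed deployment. A hedged sketch of that conditional copy pattern; the variable name and file paths are illustrative assumptions, not taken from the role:

- name: Copy backend TLS material only when enabled (sketch)
  hosts: testbed-node-0,testbed-node-1,testbed-node-2
  gather_facts: false
  vars:
    opensearch_enable_tls_backend: false   # matches the skips seen in the log
  tasks:
    - name: opensearch | Copying over backend internal TLS certificate
      ansible.builtin.copy:
        src: "/etc/kolla/config/opensearch/opensearch-cert.pem"   # illustrative source path
        dest: "/etc/kolla/opensearch/opensearch-cert.pem"
        mode: "0600"
      when: opensearch_enable_tls_backend | bool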
2025-07-06 20:13:19.997017 | orchestrator | 2025-07-06 20:13:19.997028 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-07-06 20:13:19.997039 | orchestrator | Sunday 06 July 2025 20:10:19 +0000 (0:00:01.250) 0:00:07.973 *********** 2025-07-06 20:13:19.997057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-06 20:13:19.997074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-06 20:13:19.997093 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:13:19.997105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-06 20:13:19.997117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-06 20:13:19.997129 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:19.997146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-06 20:13:19.997163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-06 20:13:19.997191 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:19.997211 | orchestrator | 2025-07-06 20:13:19.997233 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-07-06 20:13:19.997254 | orchestrator | Sunday 06 July 2025 20:10:20 +0000 (0:00:01.006) 0:00:08.979 *********** 2025-07-06 20:13:19.997275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:13:19.997294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:13:19.997306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:13:19.997362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:13:19.997385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:13:19.997398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:13:19.997410 | orchestrator | 2025-07-06 20:13:19.997421 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-07-06 20:13:19.997433 | orchestrator | Sunday 06 July 2025 20:10:23 +0000 (0:00:02.286) 0:00:11.266 *********** 2025-07-06 20:13:19.997444 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:13:19.997455 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:19.997465 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:13:19.997476 | orchestrator | 2025-07-06 20:13:19.997487 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-07-06 20:13:19.997498 | orchestrator | Sunday 06 July 2025 20:10:26 +0000 (0:00:03.021) 0:00:14.288 *********** 2025-07-06 20:13:19.997509 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:19.997519 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:13:19.997530 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:13:19.997551 | orchestrator | 2025-07-06 20:13:19.997562 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-07-06 20:13:19.997574 | orchestrator | Sunday 06 July 2025 20:10:27 +0000 (0:00:01.750) 0:00:16.038 *********** 2025-07-06 20:13:19.997592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:13:19.997607 | orchestrator | 2025-07-06 20:13:19 | INFO  | Task c688e50d-e570-45bb-8ea1-3788f914144b is in state SUCCESS 2025-07-06 20:13:19.997626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:13:19.997638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:13:19.997650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:13:19.997677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:13:19.997695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:13:19.997707 | orchestrator | 2025-07-06 20:13:19.997718 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-06 20:13:19.997729 | orchestrator | Sunday 06 July 2025 20:10:30 +0000 (0:00:02.459) 0:00:18.498 *********** 2025-07-06 20:13:19.997740 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:13:19.997751 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:19.997762 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:19.997773 | orchestrator | 2025-07-06 20:13:19.997784 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-07-06 20:13:19.997795 | orchestrator | Sunday 06 July 2025 20:10:30 +0000 (0:00:00.296) 0:00:18.795 *********** 2025-07-06 20:13:19.997806 | orchestrator | 2025-07-06 20:13:19.997817 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-07-06 20:13:19.997827 | orchestrator | Sunday 06 July 2025 20:10:30 +0000 (0:00:00.062) 0:00:18.857 *********** 2025-07-06 20:13:19.997838 | orchestrator | 2025-07-06 20:13:19.997861 | orchestrator | TASK [opensearch : Flush handlers] 
********************************************* 2025-07-06 20:13:19.997872 | orchestrator | Sunday 06 July 2025 20:10:30 +0000 (0:00:00.063) 0:00:18.921 *********** 2025-07-06 20:13:19.997883 | orchestrator | 2025-07-06 20:13:19.997894 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-07-06 20:13:19.997904 | orchestrator | Sunday 06 July 2025 20:10:31 +0000 (0:00:00.268) 0:00:19.189 *********** 2025-07-06 20:13:19.997915 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:13:19.997926 | orchestrator | 2025-07-06 20:13:19.997937 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-07-06 20:13:19.997948 | orchestrator | Sunday 06 July 2025 20:10:31 +0000 (0:00:00.271) 0:00:19.460 *********** 2025-07-06 20:13:19.997958 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:13:19.997976 | orchestrator | 2025-07-06 20:13:19.997987 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-07-06 20:13:19.997998 | orchestrator | Sunday 06 July 2025 20:10:31 +0000 (0:00:00.211) 0:00:19.671 *********** 2025-07-06 20:13:19.998009 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:19.998083 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:13:19.998095 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:13:19.998106 | orchestrator | 2025-07-06 20:13:19.998116 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-07-06 20:13:19.998127 | orchestrator | Sunday 06 July 2025 20:11:48 +0000 (0:01:16.743) 0:01:36.414 *********** 2025-07-06 20:13:19.998138 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:19.998149 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:13:19.998160 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:13:19.998170 | orchestrator | 2025-07-06 20:13:19.998181 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-06 20:13:19.998192 | orchestrator | Sunday 06 July 2025 20:13:07 +0000 (0:01:19.591) 0:02:56.006 *********** 2025-07-06 20:13:19.998203 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:13:19.998216 | orchestrator | 2025-07-06 20:13:19.998235 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-07-06 20:13:19.998254 | orchestrator | Sunday 06 July 2025 20:13:08 +0000 (0:00:00.677) 0:02:56.683 *********** 2025-07-06 20:13:19.998273 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:13:19.998293 | orchestrator | 2025-07-06 20:13:19.998312 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-07-06 20:13:19.998498 | orchestrator | Sunday 06 July 2025 20:13:10 +0000 (0:00:02.409) 0:02:59.093 *********** 2025-07-06 20:13:19.998512 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:13:19.998523 | orchestrator | 2025-07-06 20:13:19.998534 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-07-06 20:13:19.998544 | orchestrator | Sunday 06 July 2025 20:13:13 +0000 (0:00:02.400) 0:03:01.493 *********** 2025-07-06 20:13:19.998555 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:19.998566 | orchestrator | 2025-07-06 20:13:19.998577 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-07-06 
20:13:19.998602 | orchestrator | Sunday 06 July 2025 20:13:16 +0000 (0:00:02.850) 0:03:04.344 *********** 2025-07-06 20:13:19.998614 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:19.998625 | orchestrator | 2025-07-06 20:13:19.998636 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:13:19.998648 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-06 20:13:19.998661 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-06 20:13:19.998672 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-06 20:13:19.998682 | orchestrator | 2025-07-06 20:13:19.998693 | orchestrator | 2025-07-06 20:13:19.998712 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:13:19.998724 | orchestrator | Sunday 06 July 2025 20:13:18 +0000 (0:00:02.567) 0:03:06.911 *********** 2025-07-06 20:13:19.998734 | orchestrator | =============================================================================== 2025-07-06 20:13:19.998745 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 79.59s 2025-07-06 20:13:19.998756 | orchestrator | opensearch : Restart opensearch container ------------------------------ 76.74s 2025-07-06 20:13:19.998766 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.02s 2025-07-06 20:13:19.998777 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.85s 2025-07-06 20:13:19.998788 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.57s 2025-07-06 20:13:19.998810 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.47s 2025-07-06 20:13:19.998821 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.46s 2025-07-06 20:13:19.998832 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.41s 2025-07-06 20:13:19.998843 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.40s 2025-07-06 20:13:19.998853 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.29s 2025-07-06 20:13:19.998864 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.75s 2025-07-06 20:13:19.998875 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.63s 2025-07-06 20:13:19.998886 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.25s 2025-07-06 20:13:19.998895 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.01s 2025-07-06 20:13:19.998902 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.69s 2025-07-06 20:13:19.998910 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.68s 2025-07-06 20:13:19.998918 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2025-07-06 20:13:19.998926 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s 2025-07-06 20:13:19.998933 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.41s 2025-07-06 20:13:19.998941 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.39s 2025-07-06 20:13:19.998949 | orchestrator | 2025-07-06 20:13:19 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:19.998957 | orchestrator | 2025-07-06 20:13:19 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:23.047626 | orchestrator | 2025-07-06 20:13:23.047723 | orchestrator | 2025-07-06 20:13:23.047737 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-07-06 20:13:23.047750 | orchestrator | 2025-07-06 20:13:23.047761 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-07-06 20:13:23.047773 | orchestrator | Sunday 06 July 2025 20:10:12 +0000 (0:00:00.102) 0:00:00.102 *********** 2025-07-06 20:13:23.047784 | orchestrator | ok: [localhost] => { 2025-07-06 20:13:23.047798 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-07-06 20:13:23.047809 | orchestrator | } 2025-07-06 20:13:23.047821 | orchestrator | 2025-07-06 20:13:23.047832 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-07-06 20:13:23.047844 | orchestrator | Sunday 06 July 2025 20:10:12 +0000 (0:00:00.052) 0:00:00.155 *********** 2025-07-06 20:13:23.047855 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-07-06 20:13:23.047868 | orchestrator | ...ignoring 2025-07-06 20:13:23.047880 | orchestrator | 2025-07-06 20:13:23.047892 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-07-06 20:13:23.047903 | orchestrator | Sunday 06 July 2025 20:10:14 +0000 (0:00:02.809) 0:00:02.964 *********** 2025-07-06 20:13:23.047914 | orchestrator | skipping: [localhost] 2025-07-06 20:13:23.048058 | orchestrator | 2025-07-06 20:13:23.048077 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-07-06 20:13:23.048384 | orchestrator | Sunday 06 July 2025 20:10:14 +0000 (0:00:00.052) 0:00:03.016 *********** 2025-07-06 20:13:23.048399 | orchestrator | ok: [localhost] 2025-07-06 20:13:23.048410 | orchestrator | 2025-07-06 20:13:23.048421 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:13:23.048432 | orchestrator | 2025-07-06 20:13:23.048443 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:13:23.048454 | orchestrator | Sunday 06 July 2025 20:10:15 +0000 (0:00:00.146) 0:00:03.163 *********** 2025-07-06 20:13:23.048489 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:13:23.048501 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:13:23.048512 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:13:23.048523 | orchestrator | 2025-07-06 20:13:23.048534 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:13:23.048545 | orchestrator | Sunday 06 July 2025 20:10:15 +0000 (0:00:00.309) 0:00:03.472 *********** 2025-07-06 20:13:23.048556 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-07-06 20:13:23.048567 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-07-06 
20:13:23.048578 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-07-06 20:13:23.048589 | orchestrator | 2025-07-06 20:13:23.048671 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-07-06 20:13:23.048683 | orchestrator | 2025-07-06 20:13:23.048694 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-07-06 20:13:23.048705 | orchestrator | Sunday 06 July 2025 20:10:16 +0000 (0:00:00.690) 0:00:04.163 *********** 2025-07-06 20:13:23.048729 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-06 20:13:23.048741 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-07-06 20:13:23.048752 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-07-06 20:13:23.048763 | orchestrator | 2025-07-06 20:13:23.048774 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-06 20:13:23.048785 | orchestrator | Sunday 06 July 2025 20:10:16 +0000 (0:00:00.366) 0:00:04.530 *********** 2025-07-06 20:13:23.048796 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:13:23.048807 | orchestrator | 2025-07-06 20:13:23.048818 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-07-06 20:13:23.048829 | orchestrator | Sunday 06 July 2025 20:10:17 +0000 (0:00:00.517) 0:00:05.048 *********** 2025-07-06 20:13:23.048864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-06 20:13:23.048887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-06 20:13:23.048910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}}}}) 2025-07-06 20:13:23.048923 | orchestrator | 2025-07-06 20:13:23.048943 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-07-06 20:13:23.048955 | orchestrator | Sunday 06 July 2025 20:10:19 +0000 (0:00:02.836) 0:00:07.884 *********** 2025-07-06 20:13:23.048966 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:23.048977 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.048988 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:23.048999 | orchestrator | 2025-07-06 20:13:23.049010 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-07-06 20:13:23.049028 | orchestrator | Sunday 06 July 2025 20:10:20 +0000 (0:00:00.745) 0:00:08.629 *********** 2025-07-06 20:13:23.049039 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.049050 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:23.049064 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:23.049083 | orchestrator | 2025-07-06 20:13:23.049102 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-07-06 20:13:23.049120 | orchestrator | Sunday 06 July 2025 20:10:21 +0000 (0:00:01.378) 0:00:10.008 *********** 2025-07-06 20:13:23.049149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-06 20:13:23.049182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-06 20:13:23.049216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-06 20:13:23.049229 | orchestrator | 2025-07-06 20:13:23.049240 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-07-06 
20:13:23.049251 | orchestrator | Sunday 06 July 2025 20:10:25 +0000 (0:00:03.702) 0:00:13.710 *********** 2025-07-06 20:13:23.049261 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.049272 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:23.049283 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:23.049294 | orchestrator | 2025-07-06 20:13:23.049306 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-07-06 20:13:23.049360 | orchestrator | Sunday 06 July 2025 20:10:26 +0000 (0:00:01.223) 0:00:14.933 *********** 2025-07-06 20:13:23.049373 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:23.049386 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:13:23.049399 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:13:23.049411 | orchestrator | 2025-07-06 20:13:23.049424 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-06 20:13:23.049437 | orchestrator | Sunday 06 July 2025 20:10:31 +0000 (0:00:04.313) 0:00:19.246 *********** 2025-07-06 20:13:23.049450 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:13:23.049463 | orchestrator | 2025-07-06 20:13:23.049475 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-07-06 20:13:23.049489 | orchestrator | Sunday 06 July 2025 20:10:31 +0000 (0:00:00.520) 0:00:19.767 *********** 2025-07-06 20:13:23.049512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:13:23.049534 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:23.049553 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:13:23.049568 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:13:23.049589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:13:23.049612 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.049624 | orchestrator | 2025-07-06 20:13:23.049637 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-07-06 20:13:23.049650 | orchestrator | Sunday 06 July 2025 20:10:35 +0000 (0:00:03.807) 0:00:23.575 *********** 2025-07-06 20:13:23.049667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:13:23.049680 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:13:23.049699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:13:23.049718 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:23.049735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:13:23.049747 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.049759 | orchestrator | 2025-07-06 20:13:23.049770 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-07-06 20:13:23.049781 | orchestrator | Sunday 06 July 2025 20:10:37 +0000 (0:00:02.424) 0:00:26.000 *********** 2025-07-06 20:13:23.049799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:13:23.049817 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:13:23.049830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:13:23.049842 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.049858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:13:23.049880 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:23.049891 | orchestrator | 2025-07-06 20:13:23.049902 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-07-06 20:13:23.049913 | orchestrator | Sunday 06 July 2025 20:10:40 +0000 (0:00:02.510) 0:00:28.510 *********** 2025-07-06 20:13:23.049931 | orchestrator | 2025-07-06 20:13:23 | INFO  | Task e6022b34-6896-458f-82ba-fac89a81ec83 is in state SUCCESS 2025-07-06 20:13:23.049943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306
inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-06 20:13:23.049961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-06 20:13:23.049991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-06 20:13:23.050004 | orchestrator | 2025-07-06 20:13:23.050068 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-07-06 20:13:23.050083 | orchestrator | Sunday 06 July 2025 20:10:43 +0000 (0:00:03.286) 0:00:31.797 *********** 2025-07-06 20:13:23.050094 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:23.050105 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:13:23.050116 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:13:23.050132 | orchestrator | 2025-07-06 20:13:23.050151 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-07-06 20:13:23.050172 | orchestrator | Sunday 06 July 2025 20:10:44 +0000 (0:00:01.114) 0:00:32.912 *********** 2025-07-06 20:13:23.050193 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:13:23.050212 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:13:23.050232 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:13:23.050250 | orchestrator | 2025-07-06 20:13:23.050268 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-07-06 20:13:23.050279 | orchestrator | Sunday 06 July 2025 20:10:45 +0000 (0:00:00.340) 0:00:33.252 *********** 2025-07-06 20:13:23.050299 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:13:23.050310 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:13:23.050541 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:13:23.050552 | orchestrator | 2025-07-06 20:13:23.050562 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-07-06 20:13:23.050572 | orchestrator | Sunday 06 July 2025 20:10:45 +0000 (0:00:00.308) 0:00:33.561 *********** 2025-07-06 20:13:23.050583 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-07-06 20:13:23.050594 | orchestrator | ...ignoring 2025-07-06 20:13:23.050605 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-07-06 20:13:23.050615 | orchestrator | ...ignoring 2025-07-06 20:13:23.050625 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-07-06 20:13:23.050635 | orchestrator | ...ignoring 2025-07-06 20:13:23.050644 | orchestrator | 2025-07-06 20:13:23.050655 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-07-06 20:13:23.050664 | orchestrator | Sunday 06 July 2025 20:10:56 +0000 (0:00:10.884) 0:00:44.445 *********** 2025-07-06 20:13:23.050674 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:13:23.050684 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:13:23.050693 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:13:23.050703 | orchestrator | 2025-07-06 20:13:23.050713 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-07-06 20:13:23.050723 | orchestrator | Sunday 06 July 2025 20:10:57 +0000 (0:00:00.931) 0:00:45.377 *********** 2025-07-06 20:13:23.050732 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:13:23.050742 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.050752 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:23.050762 | orchestrator | 2025-07-06 20:13:23.050772 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-07-06 20:13:23.050781 | orchestrator | Sunday 06 July 2025 20:10:57 +0000 (0:00:00.571) 0:00:45.949 *********** 2025-07-06 20:13:23.050791 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:13:23.050801 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.050810 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:23.050820 | orchestrator | 2025-07-06 20:13:23.050830 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-07-06 20:13:23.050840 | orchestrator | Sunday 06 July 2025 20:10:58 +0000 (0:00:00.486) 0:00:46.436 *********** 2025-07-06 20:13:23.050850 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:13:23.050860 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:23.050878 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.050889 | orchestrator | 2025-07-06 20:13:23.050899 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-07-06 20:13:23.050908 | orchestrator | Sunday 06 July 2025 20:10:58 +0000 (0:00:00.424) 0:00:46.860 *********** 2025-07-06 20:13:23.050918 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:13:23.050928 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:13:23.050938 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:13:23.050948 | orchestrator | 2025-07-06 20:13:23.050958 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-07-06 20:13:23.050968 | orchestrator | Sunday 06 July 2025 20:10:59 +0000 (0:00:00.710) 0:00:47.571 *********** 2025-07-06 20:13:23.050977 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.050987 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:13:23.050997 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:23.051007 | orchestrator | 2025-07-06 20:13:23.051016 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-06 20:13:23.051026 | orchestrator | Sunday 06 July 2025 20:10:59 +0000 (0:00:00.395) 0:00:47.966 *********** 2025-07-06 20:13:23.051049 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.051059 | orchestrator | skipping: 
[testbed-node-2] 2025-07-06 20:13:23.051068 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-07-06 20:13:23.051078 | orchestrator | 2025-07-06 20:13:23.051088 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-07-06 20:13:23.051098 | orchestrator | Sunday 06 July 2025 20:11:00 +0000 (0:00:00.371) 0:00:48.338 *********** 2025-07-06 20:13:23.051107 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:23.051117 | orchestrator | 2025-07-06 20:13:23.051127 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-07-06 20:13:23.051136 | orchestrator | Sunday 06 July 2025 20:11:11 +0000 (0:00:11.447) 0:00:59.785 *********** 2025-07-06 20:13:23.051146 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:13:23.051156 | orchestrator | 2025-07-06 20:13:23.051170 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-06 20:13:23.051187 | orchestrator | Sunday 06 July 2025 20:11:11 +0000 (0:00:00.111) 0:00:59.897 *********** 2025-07-06 20:13:23.051204 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:13:23.051222 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.051241 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:23.051259 | orchestrator | 2025-07-06 20:13:23.051277 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-07-06 20:13:23.051295 | orchestrator | Sunday 06 July 2025 20:11:12 +0000 (0:00:00.980) 0:01:00.878 *********** 2025-07-06 20:13:23.051307 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:23.051343 | orchestrator | 2025-07-06 20:13:23.051354 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-07-06 20:13:23.051366 | orchestrator | Sunday 06 July 2025 20:11:20 +0000 (0:00:07.534) 0:01:08.413 *********** 2025-07-06 20:13:23.051377 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:13:23.051388 | orchestrator | 2025-07-06 20:13:23.051400 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-07-06 20:13:23.051417 | orchestrator | Sunday 06 July 2025 20:11:21 +0000 (0:00:01.534) 0:01:09.948 *********** 2025-07-06 20:13:23.051429 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:13:23.051440 | orchestrator | 2025-07-06 20:13:23.051453 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-07-06 20:13:23.051469 | orchestrator | Sunday 06 July 2025 20:11:24 +0000 (0:00:02.431) 0:01:12.379 *********** 2025-07-06 20:13:23.051486 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:23.051502 | orchestrator | 2025-07-06 20:13:23.051518 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-07-06 20:13:23.051534 | orchestrator | Sunday 06 July 2025 20:11:24 +0000 (0:00:00.132) 0:01:12.511 *********** 2025-07-06 20:13:23.051548 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:13:23.051558 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.051567 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:23.051577 | orchestrator | 2025-07-06 20:13:23.051587 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-07-06 20:13:23.051596 | orchestrator | Sunday 06 July 2025 20:11:24 +0000 (0:00:00.490) 0:01:13.002 *********** 
2025-07-06 20:13:23.051606 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:13:23.051616 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-07-06 20:13:23.051625 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:13:23.051635 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:13:23.051645 | orchestrator | 2025-07-06 20:13:23.051654 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-07-06 20:13:23.051664 | orchestrator | skipping: no hosts matched 2025-07-06 20:13:23.051673 | orchestrator | 2025-07-06 20:13:23.051683 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-06 20:13:23.051693 | orchestrator | 2025-07-06 20:13:23.051702 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-06 20:13:23.051712 | orchestrator | Sunday 06 July 2025 20:11:25 +0000 (0:00:00.321) 0:01:13.324 *********** 2025-07-06 20:13:23.051729 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:13:23.051738 | orchestrator | 2025-07-06 20:13:23.051748 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-06 20:13:23.051758 | orchestrator | Sunday 06 July 2025 20:11:43 +0000 (0:00:18.540) 0:01:31.865 *********** 2025-07-06 20:13:23.051767 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:13:23.051777 | orchestrator | 2025-07-06 20:13:23.051787 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-06 20:13:23.051797 | orchestrator | Sunday 06 July 2025 20:12:04 +0000 (0:00:20.609) 0:01:52.475 *********** 2025-07-06 20:13:23.051806 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:13:23.051816 | orchestrator | 2025-07-06 20:13:23.051826 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-06 20:13:23.051835 | orchestrator | 2025-07-06 20:13:23.051845 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-06 20:13:23.051855 | orchestrator | Sunday 06 July 2025 20:12:06 +0000 (0:00:02.539) 0:01:55.014 *********** 2025-07-06 20:13:23.051865 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:13:23.051875 | orchestrator | 2025-07-06 20:13:23.051892 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-06 20:13:23.051903 | orchestrator | Sunday 06 July 2025 20:12:30 +0000 (0:00:23.442) 0:02:18.457 *********** 2025-07-06 20:13:23.051912 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:13:23.051922 | orchestrator | 2025-07-06 20:13:23.051932 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-06 20:13:23.051941 | orchestrator | Sunday 06 July 2025 20:12:45 +0000 (0:00:15.506) 0:02:33.963 *********** 2025-07-06 20:13:23.051951 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:13:23.051961 | orchestrator | 2025-07-06 20:13:23.051971 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-07-06 20:13:23.051980 | orchestrator | 2025-07-06 20:13:23.051990 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-06 20:13:23.052000 | orchestrator | Sunday 06 July 2025 20:12:48 +0000 (0:00:02.690) 0:02:36.653 *********** 2025-07-06 20:13:23.052009 | orchestrator | changed: [testbed-node-0] 
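[Reader note] The "Wait for first MariaDB service to sync WSREP" and "Wait for MariaDB service to sync WSREP" steps above succeed once a Galera node reports wsrep_local_state_comment = Synced. A minimal stand-alone sketch of that kind of check, assuming PyMySQL is available and reusing the monitor account visible in the log; this is an illustration, not the role's own implementation:

    import time
    import pymysql  # assumption: PyMySQL installed; any MySQL client library works the same way

    def wait_for_wsrep_synced(host, user="monitor", password="<monitor password from the log>", timeout=300):
        """Poll the node until Galera reports wsrep_local_state_comment == 'Synced'."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                conn = pymysql.connect(host=host, port=3306, user=user, password=password)
                try:
                    with conn.cursor() as cur:
                        cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'")
                        row = cur.fetchone()
                        if row and row[1] == "Synced":
                            return True
                finally:
                    conn.close()
            except pymysql.MySQLError:
                pass  # node is still restarting or not yet accepting connections
            time.sleep(2)
        return False

    # Example, using a node address from the inventory above:
    # wait_for_wsrep_synced("192.168.16.10")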
2025-07-06 20:13:23.052019 | orchestrator | 2025-07-06 20:13:23.052029 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-06 20:13:23.052039 | orchestrator | Sunday 06 July 2025 20:13:04 +0000 (0:00:16.241) 0:02:52.895 *********** 2025-07-06 20:13:23.052048 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:13:23.052058 | orchestrator | 2025-07-06 20:13:23.052067 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-06 20:13:23.052077 | orchestrator | Sunday 06 July 2025 20:13:05 +0000 (0:00:00.661) 0:02:53.556 *********** 2025-07-06 20:13:23.052087 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:13:23.052097 | orchestrator | 2025-07-06 20:13:23.052106 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-07-06 20:13:23.052116 | orchestrator | 2025-07-06 20:13:23.052126 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-07-06 20:13:23.052136 | orchestrator | Sunday 06 July 2025 20:13:07 +0000 (0:00:02.398) 0:02:55.955 *********** 2025-07-06 20:13:23.052145 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:13:23.052155 | orchestrator | 2025-07-06 20:13:23.052165 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-07-06 20:13:23.052174 | orchestrator | Sunday 06 July 2025 20:13:08 +0000 (0:00:00.493) 0:02:56.448 *********** 2025-07-06 20:13:23.052184 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.052194 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:23.052203 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:23.052215 | orchestrator | 2025-07-06 20:13:23.052232 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-07-06 20:13:23.052248 | orchestrator | Sunday 06 July 2025 20:13:10 +0000 (0:00:02.431) 0:02:58.879 *********** 2025-07-06 20:13:23.052276 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.052289 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:23.052299 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:23.052309 | orchestrator | 2025-07-06 20:13:23.052341 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-07-06 20:13:23.052352 | orchestrator | Sunday 06 July 2025 20:13:12 +0000 (0:00:02.096) 0:03:00.976 *********** 2025-07-06 20:13:23.052367 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.052377 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:23.052387 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:23.052397 | orchestrator | 2025-07-06 20:13:23.052406 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-07-06 20:13:23.052416 | orchestrator | Sunday 06 July 2025 20:13:15 +0000 (0:00:02.455) 0:03:03.431 *********** 2025-07-06 20:13:23.052426 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.052435 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:23.052445 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:13:23.052455 | orchestrator | 2025-07-06 20:13:23.052464 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-07-06 20:13:23.052474 | orchestrator | Sunday 06 July 2025 20:13:17 +0000 (0:00:02.308) 0:03:05.740 *********** 
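[Reader note] The earlier "Check MariaDB service port liveness" failures ("Timeout when waiting for search string MariaDB in <ip>:3306") are expected on a first deployment, which is why they are ignored; the closing "Wait for MariaDB service to be ready through VIP" task repeats the same kind of reachability probe once HAProxy fronts the cluster. As a rough illustration only (the actual role tasks may use a different Ansible module), such a probe can be sketched by reading the server greeting, which carries the MariaDB version string:

    import socket

    def mariadb_banner_present(host, port=3306, timeout=10.0):
        """Return True if the MySQL-protocol greeting on host:port mentions 'MariaDB'."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.settimeout(timeout)
                # The initial handshake packet includes the server version,
                # e.g. "...MariaDB...", so a substring check is enough for liveness.
                return b"MariaDB" in sock.recv(1024)
        except OSError:
            return False

    # Example against a node address taken from the log: mariadb_banner_present("192.168.16.10")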
2025-07-06 20:13:23.052484 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:13:23.052493 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:13:23.052503 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:13:23.052513 | orchestrator | 2025-07-06 20:13:23.052523 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-07-06 20:13:23.052532 | orchestrator | Sunday 06 July 2025 20:13:20 +0000 (0:00:02.887) 0:03:08.627 *********** 2025-07-06 20:13:23.052542 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:13:23.052552 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:13:23.052561 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:13:23.052571 | orchestrator | 2025-07-06 20:13:23.052581 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:13:23.052590 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-07-06 20:13:23.052601 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-07-06 20:13:23.052612 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-07-06 20:13:23.052622 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-07-06 20:13:23.052632 | orchestrator | 2025-07-06 20:13:23.052642 | orchestrator | 2025-07-06 20:13:23.052651 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:13:23.052661 | orchestrator | Sunday 06 July 2025 20:13:20 +0000 (0:00:00.213) 0:03:08.841 *********** 2025-07-06 20:13:23.052671 | orchestrator | =============================================================================== 2025-07-06 20:13:23.052681 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 41.98s 2025-07-06 20:13:23.052698 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.12s 2025-07-06 20:13:23.052708 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.24s 2025-07-06 20:13:23.052717 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.45s 2025-07-06 20:13:23.052727 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.88s 2025-07-06 20:13:23.052737 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.53s 2025-07-06 20:13:23.052746 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.23s 2025-07-06 20:13:23.052764 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.31s 2025-07-06 20:13:23.052774 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.81s 2025-07-06 20:13:23.052784 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.70s 2025-07-06 20:13:23.052794 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.29s 2025-07-06 20:13:23.052803 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.89s 2025-07-06 20:13:23.052813 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.84s 2025-07-06 20:13:23.052823 | orchestrator | Check MariaDB service 
--------------------------------------------------- 2.81s 2025-07-06 20:13:23.052832 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.51s 2025-07-06 20:13:23.052842 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.46s 2025-07-06 20:13:23.052851 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.43s 2025-07-06 20:13:23.052861 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.43s 2025-07-06 20:13:23.052870 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.43s 2025-07-06 20:13:23.052880 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.40s 2025-07-06 20:13:23.052889 | orchestrator | 2025-07-06 20:13:23 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:23.052899 | orchestrator | 2025-07-06 20:13:23 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:13:23.052909 | orchestrator | 2025-07-06 20:13:23 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:13:23.052919 | orchestrator | 2025-07-06 20:13:23 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:26.094872 | orchestrator | 2025-07-06 20:13:26 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:26.094985 | orchestrator | 2025-07-06 20:13:26 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:13:26.095002 | orchestrator | 2025-07-06 20:13:26 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:13:26.095014 | orchestrator | 2025-07-06 20:13:26 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:29.131980 | orchestrator | 2025-07-06 20:13:29 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:29.132514 | orchestrator | 2025-07-06 20:13:29 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:13:29.133631 | orchestrator | 2025-07-06 20:13:29 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:13:29.133655 | orchestrator | 2025-07-06 20:13:29 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:32.185652 | orchestrator | 2025-07-06 20:13:32 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:32.188110 | orchestrator | 2025-07-06 20:13:32 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:13:32.189569 | orchestrator | 2025-07-06 20:13:32 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:13:32.189608 | orchestrator | 2025-07-06 20:13:32 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:35.242294 | orchestrator | 2025-07-06 20:13:35 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:35.243152 | orchestrator | 2025-07-06 20:13:35 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:13:35.244027 | orchestrator | 2025-07-06 20:13:35 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:13:35.244089 | orchestrator | 2025-07-06 20:13:35 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:38.278461 | orchestrator | 2025-07-06 20:13:38 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 
20:13:38.278566 | orchestrator | 2025-07-06 20:13:38 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:13:38.278587 | orchestrator | 2025-07-06 20:13:38 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:13:38.278600 | orchestrator | 2025-07-06 20:13:38 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:41.328273 | orchestrator | 2025-07-06 20:13:41 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:41.329585 | orchestrator | 2025-07-06 20:13:41 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:13:41.332626 | orchestrator | 2025-07-06 20:13:41 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:13:41.333030 | orchestrator | 2025-07-06 20:13:41 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:44.379992 | orchestrator | 2025-07-06 20:13:44 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:44.382609 | orchestrator | 2025-07-06 20:13:44 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:13:44.383217 | orchestrator | 2025-07-06 20:13:44 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:13:44.383243 | orchestrator | 2025-07-06 20:13:44 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:47.424745 | orchestrator | 2025-07-06 20:13:47 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:47.427741 | orchestrator | 2025-07-06 20:13:47 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:13:47.429511 | orchestrator | 2025-07-06 20:13:47 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:13:47.429541 | orchestrator | 2025-07-06 20:13:47 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:50.471798 | orchestrator | 2025-07-06 20:13:50 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:50.472512 | orchestrator | 2025-07-06 20:13:50 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:13:50.473513 | orchestrator | 2025-07-06 20:13:50 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:13:50.473590 | orchestrator | 2025-07-06 20:13:50 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:53.526508 | orchestrator | 2025-07-06 20:13:53 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:53.526623 | orchestrator | 2025-07-06 20:13:53 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:13:53.529664 | orchestrator | 2025-07-06 20:13:53 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:13:53.529792 | orchestrator | 2025-07-06 20:13:53 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:56.571581 | orchestrator | 2025-07-06 20:13:56 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:56.573123 | orchestrator | 2025-07-06 20:13:56 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:13:56.574672 | orchestrator | 2025-07-06 20:13:56 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:13:56.574749 | orchestrator | 2025-07-06 20:13:56 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:13:59.618067 | orchestrator | 2025-07-06 
20:13:59 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:13:59.623139 | orchestrator | 2025-07-06 20:13:59 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:13:59.624664 | orchestrator | 2025-07-06 20:13:59 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:13:59.626681 | orchestrator | 2025-07-06 20:13:59 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:02.675560 | orchestrator | 2025-07-06 20:14:02 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:14:02.678802 | orchestrator | 2025-07-06 20:14:02 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:02.678875 | orchestrator | 2025-07-06 20:14:02 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:02.678888 | orchestrator | 2025-07-06 20:14:02 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:05.731976 | orchestrator | 2025-07-06 20:14:05 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:14:05.733152 | orchestrator | 2025-07-06 20:14:05 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:05.734977 | orchestrator | 2025-07-06 20:14:05 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:05.735381 | orchestrator | 2025-07-06 20:14:05 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:08.778778 | orchestrator | 2025-07-06 20:14:08 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:14:08.780682 | orchestrator | 2025-07-06 20:14:08 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:08.782483 | orchestrator | 2025-07-06 20:14:08 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:08.782514 | orchestrator | 2025-07-06 20:14:08 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:11.826334 | orchestrator | 2025-07-06 20:14:11 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state STARTED 2025-07-06 20:14:11.830639 | orchestrator | 2025-07-06 20:14:11 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:11.832153 | orchestrator | 2025-07-06 20:14:11 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:11.832193 | orchestrator | 2025-07-06 20:14:11 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:14.879896 | orchestrator | 2025-07-06 20:14:14 | INFO  | Task aac0c9ee-49e7-4e20-9e4b-a22aa30969f6 is in state STARTED 2025-07-06 20:14:14.886686 | orchestrator | 2025-07-06 20:14:14.886751 | orchestrator | 2025-07-06 20:14:14.886765 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-07-06 20:14:14.886777 | orchestrator | 2025-07-06 20:14:14.886789 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-07-06 20:14:14.886801 | orchestrator | Sunday 06 July 2025 20:12:02 +0000 (0:00:00.585) 0:00:00.585 *********** 2025-07-06 20:14:14.886813 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:14:14.886826 | orchestrator | 2025-07-06 20:14:14.886837 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-07-06 20:14:14.886848 | orchestrator | Sunday 06 July 
2025 20:12:03 +0000 (0:00:00.609) 0:00:01.194 *********** 2025-07-06 20:14:14.887455 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:14:14.887480 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:14:14.887492 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:14:14.887530 | orchestrator | 2025-07-06 20:14:14.887543 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-07-06 20:14:14.887554 | orchestrator | Sunday 06 July 2025 20:12:04 +0000 (0:00:00.678) 0:00:01.873 *********** 2025-07-06 20:14:14.887565 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:14:14.887576 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:14:14.887601 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:14:14.887612 | orchestrator | 2025-07-06 20:14:14.887637 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-07-06 20:14:14.887648 | orchestrator | Sunday 06 July 2025 20:12:04 +0000 (0:00:00.285) 0:00:02.159 *********** 2025-07-06 20:14:14.887659 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:14:14.887670 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:14:14.887680 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:14:14.887691 | orchestrator | 2025-07-06 20:14:14.887702 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-07-06 20:14:14.887713 | orchestrator | Sunday 06 July 2025 20:12:05 +0000 (0:00:00.824) 0:00:02.984 *********** 2025-07-06 20:14:14.887724 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:14:14.887735 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:14:14.887745 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:14:14.887756 | orchestrator | 2025-07-06 20:14:14.887767 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-07-06 20:14:14.887778 | orchestrator | Sunday 06 July 2025 20:12:05 +0000 (0:00:00.294) 0:00:03.279 *********** 2025-07-06 20:14:14.887788 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:14:14.887799 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:14:14.887810 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:14:14.887820 | orchestrator | 2025-07-06 20:14:14.887831 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-07-06 20:14:14.887842 | orchestrator | Sunday 06 July 2025 20:12:05 +0000 (0:00:00.291) 0:00:03.570 *********** 2025-07-06 20:14:14.887853 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:14:14.887864 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:14:14.887874 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:14:14.887885 | orchestrator | 2025-07-06 20:14:14.887896 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-07-06 20:14:14.887907 | orchestrator | Sunday 06 July 2025 20:12:06 +0000 (0:00:00.299) 0:00:03.870 *********** 2025-07-06 20:14:14.887918 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.887930 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.887940 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.887951 | orchestrator | 2025-07-06 20:14:14.887962 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-07-06 20:14:14.887972 | orchestrator | Sunday 06 July 2025 20:12:06 +0000 (0:00:00.461) 0:00:04.332 *********** 2025-07-06 20:14:14.887983 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:14:14.887994 | orchestrator 
| ok: [testbed-node-4] 2025-07-06 20:14:14.888005 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:14:14.888015 | orchestrator | 2025-07-06 20:14:14.888026 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-07-06 20:14:14.888037 | orchestrator | Sunday 06 July 2025 20:12:07 +0000 (0:00:00.299) 0:00:04.631 *********** 2025-07-06 20:14:14.888048 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-06 20:14:14.888059 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-06 20:14:14.888070 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-06 20:14:14.888081 | orchestrator | 2025-07-06 20:14:14.888092 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-07-06 20:14:14.888104 | orchestrator | Sunday 06 July 2025 20:12:07 +0000 (0:00:00.619) 0:00:05.250 *********** 2025-07-06 20:14:14.888117 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:14:14.888129 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:14:14.888141 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:14:14.888161 | orchestrator | 2025-07-06 20:14:14.888175 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-07-06 20:14:14.888187 | orchestrator | Sunday 06 July 2025 20:12:08 +0000 (0:00:00.413) 0:00:05.664 *********** 2025-07-06 20:14:14.888199 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-06 20:14:14.888211 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-06 20:14:14.888224 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-06 20:14:14.888236 | orchestrator | 2025-07-06 20:14:14.888248 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-07-06 20:14:14.888261 | orchestrator | Sunday 06 July 2025 20:12:10 +0000 (0:00:02.162) 0:00:07.826 *********** 2025-07-06 20:14:14.888274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-07-06 20:14:14.888350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-06 20:14:14.888363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-07-06 20:14:14.888374 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.888385 | orchestrator | 2025-07-06 20:14:14.888397 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-07-06 20:14:14.888460 | orchestrator | Sunday 06 July 2025 20:12:10 +0000 (0:00:00.414) 0:00:08.241 *********** 2025-07-06 20:14:14.888477 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.888493 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.888504 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.888516 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.888527 | orchestrator | 2025-07-06 20:14:14.888538 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-07-06 20:14:14.888555 | orchestrator | Sunday 06 July 2025 20:12:11 +0000 (0:00:00.838) 0:00:09.079 *********** 2025-07-06 20:14:14.888569 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.888583 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.888595 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.888606 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.888617 | orchestrator | 2025-07-06 20:14:14.888628 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-07-06 20:14:14.888647 | orchestrator | Sunday 06 July 2025 20:12:11 +0000 (0:00:00.152) 0:00:09.232 *********** 2025-07-06 20:14:14.888661 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c814e8933a2f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-06 20:12:08.718140', 'end': '2025-07-06 20:12:08.760447', 'delta': '0:00:00.042307', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c814e8933a2f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-07-06 20:14:14.888676 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '225801eb6695', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-07-06 20:12:09.463239', 'end': '2025-07-06 20:12:09.511484', 'delta': '0:00:00.048245', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 
'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['225801eb6695'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-07-06 20:14:14.888721 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3d014757d1c7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-06 20:12:10.031052', 'end': '2025-07-06 20:12:10.066403', 'delta': '0:00:00.035351', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3d014757d1c7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-07-06 20:14:14.888736 | orchestrator | 2025-07-06 20:14:14.888747 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-07-06 20:14:14.888758 | orchestrator | Sunday 06 July 2025 20:12:11 +0000 (0:00:00.350) 0:00:09.583 *********** 2025-07-06 20:14:14.888769 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:14:14.888780 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:14:14.888791 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:14:14.888802 | orchestrator | 2025-07-06 20:14:14.888818 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-07-06 20:14:14.888829 | orchestrator | Sunday 06 July 2025 20:12:12 +0000 (0:00:00.446) 0:00:10.030 *********** 2025-07-06 20:14:14.888840 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-07-06 20:14:14.888851 | orchestrator | 2025-07-06 20:14:14.888862 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-07-06 20:14:14.888873 | orchestrator | Sunday 06 July 2025 20:12:14 +0000 (0:00:01.993) 0:00:12.023 *********** 2025-07-06 20:14:14.888884 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.888895 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.888906 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.889017 | orchestrator | 2025-07-06 20:14:14.889035 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-07-06 20:14:14.889046 | orchestrator | Sunday 06 July 2025 20:12:14 +0000 (0:00:00.272) 0:00:12.295 *********** 2025-07-06 20:14:14.889057 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.889076 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.889087 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.889098 | orchestrator | 2025-07-06 20:14:14.889109 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-07-06 20:14:14.889120 | orchestrator | Sunday 06 July 2025 20:12:15 +0000 (0:00:00.409) 0:00:12.705 *********** 2025-07-06 20:14:14.889131 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.889141 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.889152 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.889163 | orchestrator | 2025-07-06 20:14:14.889223 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-07-06 20:14:14.889237 
| orchestrator | Sunday 06 July 2025 20:12:15 +0000 (0:00:00.466) 0:00:13.172 *********** 2025-07-06 20:14:14.889248 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:14:14.889259 | orchestrator | 2025-07-06 20:14:14.889269 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-07-06 20:14:14.889342 | orchestrator | Sunday 06 July 2025 20:12:15 +0000 (0:00:00.133) 0:00:13.305 *********** 2025-07-06 20:14:14.889354 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.889365 | orchestrator | 2025-07-06 20:14:14.889376 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-07-06 20:14:14.889387 | orchestrator | Sunday 06 July 2025 20:12:15 +0000 (0:00:00.229) 0:00:13.535 *********** 2025-07-06 20:14:14.889398 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.889409 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.889420 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.889430 | orchestrator | 2025-07-06 20:14:14.889441 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-07-06 20:14:14.889452 | orchestrator | Sunday 06 July 2025 20:12:16 +0000 (0:00:00.292) 0:00:13.828 *********** 2025-07-06 20:14:14.889463 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.889474 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.889484 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.889495 | orchestrator | 2025-07-06 20:14:14.889506 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-07-06 20:14:14.889517 | orchestrator | Sunday 06 July 2025 20:12:16 +0000 (0:00:00.306) 0:00:14.135 *********** 2025-07-06 20:14:14.889528 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.889538 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.889549 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.889560 | orchestrator | 2025-07-06 20:14:14.889570 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-07-06 20:14:14.889582 | orchestrator | Sunday 06 July 2025 20:12:17 +0000 (0:00:00.481) 0:00:14.617 *********** 2025-07-06 20:14:14.889592 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.889603 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.889614 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.889625 | orchestrator | 2025-07-06 20:14:14.889635 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-07-06 20:14:14.889646 | orchestrator | Sunday 06 July 2025 20:12:17 +0000 (0:00:00.321) 0:00:14.938 *********** 2025-07-06 20:14:14.889657 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.889668 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.889679 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.889690 | orchestrator | 2025-07-06 20:14:14.889700 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-07-06 20:14:14.889711 | orchestrator | Sunday 06 July 2025 20:12:17 +0000 (0:00:00.307) 0:00:15.245 *********** 2025-07-06 20:14:14.889722 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.889733 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.889744 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.889754 | orchestrator | 
2025-07-06 20:14:14.889765 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-07-06 20:14:14.889813 | orchestrator | Sunday 06 July 2025 20:12:17 +0000 (0:00:00.304) 0:00:15.550 *********** 2025-07-06 20:14:14.889836 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.889847 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.889858 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.889870 | orchestrator | 2025-07-06 20:14:14.889882 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-07-06 20:14:14.889895 | orchestrator | Sunday 06 July 2025 20:12:18 +0000 (0:00:00.479) 0:00:16.029 *********** 2025-07-06 20:14:14.889915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5b3ebdad--89cb--5093--adb4--41e3a34848e3-osd--block--5b3ebdad--89cb--5093--adb4--41e3a34848e3', 'dm-uuid-LVM-d7HjWU3JzXeSeQbjfc2n9Yi9OGiYQHwxPT90GOkoOAxFv9UtUw1qQalfE6UDZoVk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.889930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--67620618--3322--5703--9264--076cb24f91fa-osd--block--67620618--3322--5703--9264--076cb24f91fa', 'dm-uuid-LVM-8M7FNHYgTDJ9A4eglNQUhos7W2WwexO36l6gjlXuHie3wt9U7ZPzJLjsWM5gLxG0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.889943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.889957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.889970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.889983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.889996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890084 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part1', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part14', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part15', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part16', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:14:14.890158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6b2ac7c1--b26c--557b--8077--56c3cb59db23-osd--block--6b2ac7c1--b26c--557b--8077--56c3cb59db23', 'dm-uuid-LVM-QfX16kVcdVYnqzdCOCVjaqNxpgP4soHxJE8lczAYT7NweX7RTBI5cncey0TFLr60'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5b3ebdad--89cb--5093--adb4--41e3a34848e3-osd--block--5b3ebdad--89cb--5093--adb4--41e3a34848e3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xm31dv-gyCR-GRcW-qog6-fURB-MiId-z72Sxq', 'scsi-0QEMU_QEMU_HARDDISK_901e3f2c-f061-4105-8266-58d4d98b5960', 'scsi-SQEMU_QEMU_HARDDISK_901e3f2c-f061-4105-8266-58d4d98b5960'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:14:14.890271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--67620618--3322--5703--9264--076cb24f91fa-osd--block--67620618--3322--5703--9264--076cb24f91fa'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-61Xthh-rbJ0-B71E-GmRZ-6SSd-Wz1L-h1cJu7', 'scsi-0QEMU_QEMU_HARDDISK_46febb03-7465-44d2-9b41-dd661ec3aa7d', 'scsi-SQEMU_QEMU_HARDDISK_46febb03-7465-44d2-9b41-dd661ec3aa7d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:14:14.890307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e81f0ba1--e76a--5ac2--85fd--9d5b359e204d-osd--block--e81f0ba1--e76a--5ac2--85fd--9d5b359e204d', 'dm-uuid-LVM-CA1Wfim9SpDpxBtKo1BwTB5y8rmoIm3RXYW2SxLOg9CT7NfGhrhf8NOuQriXg0QO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad2af1d2-0168-4556-9317-4e4f08581fa1', 'scsi-SQEMU_QEMU_HARDDISK_ad2af1d2-0168-4556-9317-4e4f08581fa1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:14:14.890333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:14:14.890358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890489 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.890500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part1', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part14', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part15', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part16', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:14:14.890573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--6b2ac7c1--b26c--557b--8077--56c3cb59db23-osd--block--6b2ac7c1--b26c--557b--8077--56c3cb59db23'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-53lk1J-mHQQ-paPR-nldo-PB2W-6kAU-0TGfMM', 'scsi-0QEMU_QEMU_HARDDISK_95e38168-1e77-4099-bfde-ad7249670c4c', 'scsi-SQEMU_QEMU_HARDDISK_95e38168-1e77-4099-bfde-ad7249670c4c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:14:14.890586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e81f0ba1--e76a--5ac2--85fd--9d5b359e204d-osd--block--e81f0ba1--e76a--5ac2--85fd--9d5b359e204d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1eTAzL-LZpg-Kw21-QsDF-KF5N-hqpe-hB04d2', 'scsi-0QEMU_QEMU_HARDDISK_951512cc-5411-4e34-a1bc-779e76dbc3d2', 'scsi-SQEMU_QEMU_HARDDISK_951512cc-5411-4e34-a1bc-779e76dbc3d2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:14:14.890598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6eb6290b-216e-4753-9f37-507fd8d1c155', 'scsi-SQEMU_QEMU_HARDDISK_6eb6290b-216e-4753-9f37-507fd8d1c155'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:14:14.890610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:14:14.890621 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.890633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4472ae94--c442--5fee--95ac--d2e3b3e55ca4-osd--block--4472ae94--c442--5fee--95ac--d2e3b3e55ca4', 'dm-uuid-LVM-I5ATjPgkR63NkWUiDD1bjVOQFzhFfRUcotxcS8zflvAYkHLilg6Wke1DJ5epgIrF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890659 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c6cf71a--fa39--576b--8a24--237c163534df-osd--block--8c6cf71a--fa39--576b--8a24--237c163534df', 'dm-uuid-LVM-bdIz1aaEKdbNyRiBnwwOSbuQhj8IhhO6l6FvchNFMc6smPYfiWBRhLZKf4KLrJzH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890712 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:14:14.890794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part1', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part14', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part15', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part16', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:14:14.890808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4472ae94--c442--5fee--95ac--d2e3b3e55ca4-osd--block--4472ae94--c442--5fee--95ac--d2e3b3e55ca4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yx5XFo-M4DJ-bRrP-qvbI-GdzE-w8dn-bShrLr', 'scsi-0QEMU_QEMU_HARDDISK_d394e861-9c48-44bd-b1dc-9e2695f6f7e7', 'scsi-SQEMU_QEMU_HARDDISK_d394e861-9c48-44bd-b1dc-9e2695f6f7e7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:14:14.890821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8c6cf71a--fa39--576b--8a24--237c163534df-osd--block--8c6cf71a--fa39--576b--8a24--237c163534df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kg4Teq-63G8-2Kkl-7gz1-J35t-HfCp-q0Kknc', 'scsi-0QEMU_QEMU_HARDDISK_ee53a9be-d7f6-4740-ab76-379edf2c3c5b', 'scsi-SQEMU_QEMU_HARDDISK_ee53a9be-d7f6-4740-ab76-379edf2c3c5b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:14:14.890842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_825fbe01-1f52-40fd-870f-6965feac768c', 'scsi-SQEMU_QEMU_HARDDISK_825fbe01-1f52-40fd-870f-6965feac768c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:14:14.890862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:14:14.890875 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.890886 | orchestrator | 2025-07-06 20:14:14.890897 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-07-06 20:14:14.890909 | orchestrator | Sunday 06 July 2025 20:12:18 +0000 (0:00:00.549) 0:00:16.578 *********** 2025-07-06 20:14:14.890926 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5b3ebdad--89cb--5093--adb4--41e3a34848e3-osd--block--5b3ebdad--89cb--5093--adb4--41e3a34848e3', 'dm-uuid-LVM-d7HjWU3JzXeSeQbjfc2n9Yi9OGiYQHwxPT90GOkoOAxFv9UtUw1qQalfE6UDZoVk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.890939 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--67620618--3322--5703--9264--076cb24f91fa-osd--block--67620618--3322--5703--9264--076cb24f91fa', 'dm-uuid-LVM-8M7FNHYgTDJ9A4eglNQUhos7W2WwexO36l6gjlXuHie3wt9U7ZPzJLjsWM5gLxG0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.890951 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.890969 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.890981 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891001 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891013 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891030 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891042 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891053 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891082 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part1', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part14', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part15', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part16', 'scsi-SQEMU_QEMU_HARDDISK_32940bce-9d30-4ec6-9fea-d63c9095158b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891101 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5b3ebdad--89cb--5093--adb4--41e3a34848e3-osd--block--5b3ebdad--89cb--5093--adb4--41e3a34848e3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xm31dv-gyCR-GRcW-qog6-fURB-MiId-z72Sxq', 'scsi-0QEMU_QEMU_HARDDISK_901e3f2c-f061-4105-8266-58d4d98b5960', 'scsi-SQEMU_QEMU_HARDDISK_901e3f2c-f061-4105-8266-58d4d98b5960'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891115 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--67620618--3322--5703--9264--076cb24f91fa-osd--block--67620618--3322--5703--9264--076cb24f91fa'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-61Xthh-rbJ0-B71E-GmRZ-6SSd-Wz1L-h1cJu7', 'scsi-0QEMU_QEMU_HARDDISK_46febb03-7465-44d2-9b41-dd661ec3aa7d', 'scsi-SQEMU_QEMU_HARDDISK_46febb03-7465-44d2-9b41-dd661ec3aa7d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891134 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6b2ac7c1--b26c--557b--8077--56c3cb59db23-osd--block--6b2ac7c1--b26c--557b--8077--56c3cb59db23', 'dm-uuid-LVM-QfX16kVcdVYnqzdCOCVjaqNxpgP4soHxJE8lczAYT7NweX7RTBI5cncey0TFLr60'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891154 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad2af1d2-0168-4556-9317-4e4f08581fa1', 'scsi-SQEMU_QEMU_HARDDISK_ad2af1d2-0168-4556-9317-4e4f08581fa1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891174 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e81f0ba1--e76a--5ac2--85fd--9d5b359e204d-osd--block--e81f0ba1--e76a--5ac2--85fd--9d5b359e204d', 'dm-uuid-LVM-CA1Wfim9SpDpxBtKo1BwTB5y8rmoIm3RXYW2SxLOg9CT7NfGhrhf8NOuQriXg0QO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891187 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': 
['2025-07-06-19-22-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891198 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891217 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.891229 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891241 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891259 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891271 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891318 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891337 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4472ae94--c442--5fee--95ac--d2e3b3e55ca4-osd--block--4472ae94--c442--5fee--95ac--d2e3b3e55ca4', 'dm-uuid-LVM-I5ATjPgkR63NkWUiDD1bjVOQFzhFfRUcotxcS8zflvAYkHLilg6Wke1DJ5epgIrF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891363 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891381 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c6cf71a--fa39--576b--8a24--237c163534df-osd--block--8c6cf71a--fa39--576b--8a24--237c163534df', 'dm-uuid-LVM-bdIz1aaEKdbNyRiBnwwOSbuQhj8IhhO6l6FvchNFMc6smPYfiWBRhLZKf4KLrJzH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891411 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891431 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891459 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part1', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part14', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part15', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part16', 'scsi-SQEMU_QEMU_HARDDISK_01ded91f-df62-4447-a733-0e6b15acbb5e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891481 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14 | INFO  | Task 73699ecd-1146-4de4-b6de-d162a749e622 is in state SUCCESS 2025-07-06 20:14:14.891500 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6b2ac7c1--b26c--557b--8077--56c3cb59db23-osd--block--6b2ac7c1--b26c--557b--8077--56c3cb59db23'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-53lk1J-mHQQ-paPR-nldo-PB2W-6kAU-0TGfMM', 'scsi-0QEMU_QEMU_HARDDISK_95e38168-1e77-4099-bfde-ad7249670c4c', 'scsi-SQEMU_QEMU_HARDDISK_95e38168-1e77-4099-bfde-ad7249670c4c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891530 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891542 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e81f0ba1--e76a--5ac2--85fd--9d5b359e204d-osd--block--e81f0ba1--e76a--5ac2--85fd--9d5b359e204d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1eTAzL-LZpg-Kw21-QsDF-KF5N-hqpe-hB04d2', 'scsi-0QEMU_QEMU_HARDDISK_951512cc-5411-4e34-a1bc-779e76dbc3d2', 'scsi-SQEMU_QEMU_HARDDISK_951512cc-5411-4e34-a1bc-779e76dbc3d2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891560 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6eb6290b-216e-4753-9f37-507fd8d1c155', 'scsi-SQEMU_QEMU_HARDDISK_6eb6290b-216e-4753-9f37-507fd8d1c155'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891571 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891590 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891606 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891618 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.891629 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891648 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891659 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891684 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part1', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part14', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part15', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part16', 'scsi-SQEMU_QEMU_HARDDISK_9a360e1e-d618-4e64-9063-d6a563856280-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891699 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4472ae94--c442--5fee--95ac--d2e3b3e55ca4-osd--block--4472ae94--c442--5fee--95ac--d2e3b3e55ca4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yx5XFo-M4DJ-bRrP-qvbI-GdzE-w8dn-bShrLr', 'scsi-0QEMU_QEMU_HARDDISK_d394e861-9c48-44bd-b1dc-9e2695f6f7e7', 'scsi-SQEMU_QEMU_HARDDISK_d394e861-9c48-44bd-b1dc-9e2695f6f7e7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891718 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8c6cf71a--fa39--576b--8a24--237c163534df-osd--block--8c6cf71a--fa39--576b--8a24--237c163534df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kg4Teq-63G8-2Kkl-7gz1-J35t-HfCp-q0Kknc', 'scsi-0QEMU_QEMU_HARDDISK_ee53a9be-d7f6-4740-ab76-379edf2c3c5b', 'scsi-SQEMU_QEMU_HARDDISK_ee53a9be-d7f6-4740-ab76-379edf2c3c5b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891730 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_825fbe01-1f52-40fd-870f-6965feac768c', 'scsi-SQEMU_QEMU_HARDDISK_825fbe01-1f52-40fd-870f-6965feac768c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891747 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:14:14.891759 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.891770 | orchestrator | 2025-07-06 20:14:14.891781 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-07-06 20:14:14.891792 | orchestrator | Sunday 06 July 2025 20:12:19 +0000 (0:00:00.558) 0:00:17.137 *********** 2025-07-06 20:14:14.891803 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:14:14.891814 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:14:14.891825 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:14:14.891836 | orchestrator | 2025-07-06 20:14:14.891847 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-07-06 20:14:14.891865 | orchestrator | Sunday 06 July 2025 20:12:20 +0000 (0:00:00.660) 0:00:17.797 *********** 2025-07-06 20:14:14.891876 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:14:14.891892 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:14:14.891903 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:14:14.891914 | orchestrator | 2025-07-06 20:14:14.891925 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-07-06 20:14:14.891936 | orchestrator | Sunday 06 July 2025 20:12:20 +0000 (0:00:00.439) 0:00:18.236 *********** 2025-07-06 20:14:14.891947 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:14:14.891958 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:14:14.891969 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:14:14.891980 | orchestrator | 2025-07-06 20:14:14.891990 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-07-06 20:14:14.892001 | orchestrator | Sunday 06 July 2025 20:12:21 +0000 (0:00:00.707) 0:00:18.943 *********** 2025-07-06 20:14:14.892012 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.892023 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.892034 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.892045 | orchestrator | 2025-07-06 20:14:14.892056 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-07-06 20:14:14.892067 | orchestrator | Sunday 06 July 2025 
20:12:21 +0000 (0:00:00.288) 0:00:19.232 *********** 2025-07-06 20:14:14.892077 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.892088 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.892099 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.892110 | orchestrator | 2025-07-06 20:14:14.892121 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-07-06 20:14:14.892132 | orchestrator | Sunday 06 July 2025 20:12:22 +0000 (0:00:00.398) 0:00:19.630 *********** 2025-07-06 20:14:14.892143 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.892154 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.892165 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.892175 | orchestrator | 2025-07-06 20:14:14.892186 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-07-06 20:14:14.892197 | orchestrator | Sunday 06 July 2025 20:12:22 +0000 (0:00:00.483) 0:00:20.113 *********** 2025-07-06 20:14:14.892208 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-07-06 20:14:14.892219 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-07-06 20:14:14.892229 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-07-06 20:14:14.892240 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-07-06 20:14:14.892251 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-07-06 20:14:14.892262 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-07-06 20:14:14.892273 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-07-06 20:14:14.892314 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-07-06 20:14:14.892325 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-07-06 20:14:14.892336 | orchestrator | 2025-07-06 20:14:14.892347 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-07-06 20:14:14.892358 | orchestrator | Sunday 06 July 2025 20:12:23 +0000 (0:00:00.821) 0:00:20.934 *********** 2025-07-06 20:14:14.892369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-07-06 20:14:14.892380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-06 20:14:14.892391 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-07-06 20:14:14.892401 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.892412 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-07-06 20:14:14.892423 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-07-06 20:14:14.892434 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-07-06 20:14:14.892444 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.892455 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-07-06 20:14:14.892473 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-07-06 20:14:14.892484 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-07-06 20:14:14.892495 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.892506 | orchestrator | 2025-07-06 20:14:14.892517 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-07-06 20:14:14.892527 | orchestrator | Sunday 06 July 2025 20:12:23 +0000 (0:00:00.349) 0:00:21.284 *********** 2025-07-06 20:14:14.892539 | 
orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:14:14.892550 | orchestrator | 2025-07-06 20:14:14.892561 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-07-06 20:14:14.892572 | orchestrator | Sunday 06 July 2025 20:12:24 +0000 (0:00:00.687) 0:00:21.972 *********** 2025-07-06 20:14:14.892589 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.892601 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.892611 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.892622 | orchestrator | 2025-07-06 20:14:14.892633 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-07-06 20:14:14.892644 | orchestrator | Sunday 06 July 2025 20:12:24 +0000 (0:00:00.305) 0:00:22.277 *********** 2025-07-06 20:14:14.892655 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.892666 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.892677 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.892687 | orchestrator | 2025-07-06 20:14:14.892698 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-07-06 20:14:14.892709 | orchestrator | Sunday 06 July 2025 20:12:24 +0000 (0:00:00.281) 0:00:22.558 *********** 2025-07-06 20:14:14.892720 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.892731 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.892742 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:14:14.892753 | orchestrator | 2025-07-06 20:14:14.892764 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-07-06 20:14:14.892775 | orchestrator | Sunday 06 July 2025 20:12:25 +0000 (0:00:00.308) 0:00:22.867 *********** 2025-07-06 20:14:14.892786 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:14:14.892797 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:14:14.892817 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:14:14.892828 | orchestrator | 2025-07-06 20:14:14.892839 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-07-06 20:14:14.892850 | orchestrator | Sunday 06 July 2025 20:12:25 +0000 (0:00:00.567) 0:00:23.434 *********** 2025-07-06 20:14:14.892861 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:14:14.892872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:14:14.892882 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:14:14.892893 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.892904 | orchestrator | 2025-07-06 20:14:14.892915 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-07-06 20:14:14.892926 | orchestrator | Sunday 06 July 2025 20:12:26 +0000 (0:00:00.392) 0:00:23.827 *********** 2025-07-06 20:14:14.892937 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:14:14.892947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:14:14.892958 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:14:14.892969 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.892980 | orchestrator | 2025-07-06 20:14:14.892990 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-07-06 20:14:14.893001 | orchestrator | Sunday 06 July 2025 20:12:26 +0000 (0:00:00.368) 0:00:24.196 *********** 2025-07-06 20:14:14.893012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:14:14.893032 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:14:14.893043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:14:14.893054 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.893065 | orchestrator | 2025-07-06 20:14:14.893076 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-07-06 20:14:14.893087 | orchestrator | Sunday 06 July 2025 20:12:26 +0000 (0:00:00.360) 0:00:24.556 *********** 2025-07-06 20:14:14.893097 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:14:14.893108 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:14:14.893119 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:14:14.893130 | orchestrator | 2025-07-06 20:14:14.893141 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-07-06 20:14:14.893152 | orchestrator | Sunday 06 July 2025 20:12:27 +0000 (0:00:00.302) 0:00:24.859 *********** 2025-07-06 20:14:14.893162 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-07-06 20:14:14.893173 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-07-06 20:14:14.893184 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-07-06 20:14:14.893195 | orchestrator | 2025-07-06 20:14:14.893206 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-07-06 20:14:14.893217 | orchestrator | Sunday 06 July 2025 20:12:27 +0000 (0:00:00.523) 0:00:25.382 *********** 2025-07-06 20:14:14.893228 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-06 20:14:14.893238 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-06 20:14:14.893249 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-06 20:14:14.893260 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-07-06 20:14:14.893271 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-06 20:14:14.893329 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-06 20:14:14.893341 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-06 20:14:14.893351 | orchestrator | 2025-07-06 20:14:14.893362 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-07-06 20:14:14.893373 | orchestrator | Sunday 06 July 2025 20:12:28 +0000 (0:00:00.944) 0:00:26.327 *********** 2025-07-06 20:14:14.893384 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-06 20:14:14.893395 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-06 20:14:14.893405 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-06 20:14:14.893416 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-07-06 20:14:14.893427 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-06 
20:14:14.893438 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-06 20:14:14.893455 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-06 20:14:14.893466 | orchestrator | 2025-07-06 20:14:14.893477 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-07-06 20:14:14.893488 | orchestrator | Sunday 06 July 2025 20:12:30 +0000 (0:00:01.931) 0:00:28.259 *********** 2025-07-06 20:14:14.893499 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:14:14.893510 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:14:14.893521 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-07-06 20:14:14.893532 | orchestrator | 2025-07-06 20:14:14.893543 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-07-06 20:14:14.893553 | orchestrator | Sunday 06 July 2025 20:12:31 +0000 (0:00:00.379) 0:00:28.638 *********** 2025-07-06 20:14:14.893566 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-06 20:14:14.893590 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-06 20:14:14.893602 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-06 20:14:14.893614 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-06 20:14:14.893625 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-06 20:14:14.893636 | orchestrator | 2025-07-06 20:14:14.893647 | orchestrator | TASK [generate keys] *********************************************************** 2025-07-06 20:14:14.893658 | orchestrator | Sunday 06 July 2025 20:13:17 +0000 (0:00:46.833) 0:01:15.472 *********** 2025-07-06 20:14:14.893668 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.893679 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.893690 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.893701 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.893712 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.893721 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.893731 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-07-06 20:14:14.893740 | orchestrator | 2025-07-06 20:14:14.893750 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-07-06 20:14:14.893759 | orchestrator | Sunday 06 July 2025 20:13:42 +0000 (0:00:24.289) 0:01:39.761 *********** 2025-07-06 20:14:14.893769 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.893778 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.893788 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.893798 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.893807 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.893817 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.893826 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-06 20:14:14.893836 | orchestrator | 2025-07-06 20:14:14.893846 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-07-06 20:14:14.893855 | orchestrator | Sunday 06 July 2025 20:13:54 +0000 (0:00:12.085) 0:01:51.847 *********** 2025-07-06 20:14:14.893865 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.893874 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-06 20:14:14.893884 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-06 20:14:14.893901 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.893916 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-06 20:14:14.893926 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-06 20:14:14.893936 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.893946 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-06 20:14:14.893956 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-06 20:14:14.893965 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.893975 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-06 20:14:14.893985 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-06 20:14:14.893994 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.894004 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-06 20:14:14.894014 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-06 20:14:14.894087 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:14:14.894098 | orchestrator | 
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-06 20:14:14.894107 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-06 20:14:14.894117 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-07-06 20:14:14.894127 | orchestrator | 2025-07-06 20:14:14.894136 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:14:14.894146 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-07-06 20:14:14.894157 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-07-06 20:14:14.894167 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-07-06 20:14:14.894177 | orchestrator | 2025-07-06 20:14:14.894186 | orchestrator | 2025-07-06 20:14:14.894196 | orchestrator | 2025-07-06 20:14:14.894206 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:14:14.894216 | orchestrator | Sunday 06 July 2025 20:14:12 +0000 (0:00:17.989) 0:02:09.836 *********** 2025-07-06 20:14:14.894225 | orchestrator | =============================================================================== 2025-07-06 20:14:14.894236 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.83s 2025-07-06 20:14:14.894245 | orchestrator | generate keys ---------------------------------------------------------- 24.29s 2025-07-06 20:14:14.894255 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.99s 2025-07-06 20:14:14.894265 | orchestrator | get keys from monitors ------------------------------------------------- 12.09s 2025-07-06 20:14:14.894274 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.16s 2025-07-06 20:14:14.894302 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.99s 2025-07-06 20:14:14.894312 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.93s 2025-07-06 20:14:14.894322 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.94s 2025-07-06 20:14:14.894332 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.84s 2025-07-06 20:14:14.894341 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.82s 2025-07-06 20:14:14.894351 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.82s 2025-07-06 20:14:14.894369 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.71s 2025-07-06 20:14:14.894379 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.69s 2025-07-06 20:14:14.894389 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.68s 2025-07-06 20:14:14.894399 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.66s 2025-07-06 20:14:14.894409 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.62s 2025-07-06 20:14:14.894418 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.61s 2025-07-06 20:14:14.894428 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address 
--------------- 0.57s 2025-07-06 20:14:14.894438 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.56s 2025-07-06 20:14:14.894447 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.55s 2025-07-06 20:14:14.894457 | orchestrator | 2025-07-06 20:14:14 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:14.894467 | orchestrator | 2025-07-06 20:14:14 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:14.894477 | orchestrator | 2025-07-06 20:14:14 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:17.940885 | orchestrator | 2025-07-06 20:14:17 | INFO  | Task aac0c9ee-49e7-4e20-9e4b-a22aa30969f6 is in state STARTED 2025-07-06 20:14:17.942878 | orchestrator | 2025-07-06 20:14:17 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:17.944900 | orchestrator | 2025-07-06 20:14:17 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:17.945143 | orchestrator | 2025-07-06 20:14:17 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:20.982773 | orchestrator | 2025-07-06 20:14:20 | INFO  | Task aac0c9ee-49e7-4e20-9e4b-a22aa30969f6 is in state STARTED 2025-07-06 20:14:20.984249 | orchestrator | 2025-07-06 20:14:20 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:20.985540 | orchestrator | 2025-07-06 20:14:20 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:20.985590 | orchestrator | 2025-07-06 20:14:20 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:24.030242 | orchestrator | 2025-07-06 20:14:24 | INFO  | Task aac0c9ee-49e7-4e20-9e4b-a22aa30969f6 is in state STARTED 2025-07-06 20:14:24.030396 | orchestrator | 2025-07-06 20:14:24 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:24.030680 | orchestrator | 2025-07-06 20:14:24 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:24.030799 | orchestrator | 2025-07-06 20:14:24 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:27.084677 | orchestrator | 2025-07-06 20:14:27 | INFO  | Task aac0c9ee-49e7-4e20-9e4b-a22aa30969f6 is in state STARTED 2025-07-06 20:14:27.085477 | orchestrator | 2025-07-06 20:14:27 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:27.087346 | orchestrator | 2025-07-06 20:14:27 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:27.087503 | orchestrator | 2025-07-06 20:14:27 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:30.127516 | orchestrator | 2025-07-06 20:14:30 | INFO  | Task aac0c9ee-49e7-4e20-9e4b-a22aa30969f6 is in state STARTED 2025-07-06 20:14:30.128675 | orchestrator | 2025-07-06 20:14:30 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:30.131349 | orchestrator | 2025-07-06 20:14:30 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:30.132330 | orchestrator | 2025-07-06 20:14:30 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:33.174236 | orchestrator | 2025-07-06 20:14:33 | INFO  | Task aac0c9ee-49e7-4e20-9e4b-a22aa30969f6 is in state STARTED 2025-07-06 20:14:33.175370 | orchestrator | 2025-07-06 20:14:33 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 
20:14:33.176327 | orchestrator | 2025-07-06 20:14:33 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:33.176479 | orchestrator | 2025-07-06 20:14:33 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:36.219405 | orchestrator | 2025-07-06 20:14:36 | INFO  | Task aac0c9ee-49e7-4e20-9e4b-a22aa30969f6 is in state STARTED 2025-07-06 20:14:36.221120 | orchestrator | 2025-07-06 20:14:36 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:36.222242 | orchestrator | 2025-07-06 20:14:36 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:36.222294 | orchestrator | 2025-07-06 20:14:36 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:39.281236 | orchestrator | 2025-07-06 20:14:39 | INFO  | Task aac0c9ee-49e7-4e20-9e4b-a22aa30969f6 is in state STARTED 2025-07-06 20:14:39.281442 | orchestrator | 2025-07-06 20:14:39 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:39.282479 | orchestrator | 2025-07-06 20:14:39 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:39.282526 | orchestrator | 2025-07-06 20:14:39 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:42.319049 | orchestrator | 2025-07-06 20:14:42 | INFO  | Task aac0c9ee-49e7-4e20-9e4b-a22aa30969f6 is in state STARTED 2025-07-06 20:14:42.319737 | orchestrator | 2025-07-06 20:14:42 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:42.321327 | orchestrator | 2025-07-06 20:14:42 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:42.321368 | orchestrator | 2025-07-06 20:14:42 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:45.374802 | orchestrator | 2025-07-06 20:14:45 | INFO  | Task aac0c9ee-49e7-4e20-9e4b-a22aa30969f6 is in state SUCCESS 2025-07-06 20:14:45.376179 | orchestrator | 2025-07-06 20:14:45 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:14:45.378503 | orchestrator | 2025-07-06 20:14:45 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:45.380480 | orchestrator | 2025-07-06 20:14:45 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:45.380621 | orchestrator | 2025-07-06 20:14:45 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:48.432324 | orchestrator | 2025-07-06 20:14:48 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:14:48.433924 | orchestrator | 2025-07-06 20:14:48 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:48.435869 | orchestrator | 2025-07-06 20:14:48 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:48.436424 | orchestrator | 2025-07-06 20:14:48 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:51.486196 | orchestrator | 2025-07-06 20:14:51 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:14:51.488228 | orchestrator | 2025-07-06 20:14:51 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:51.491436 | orchestrator | 2025-07-06 20:14:51 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:51.491994 | orchestrator | 2025-07-06 20:14:51 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:54.536395 | orchestrator | 2025-07-06 
20:14:54 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:14:54.538578 | orchestrator | 2025-07-06 20:14:54 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:54.539985 | orchestrator | 2025-07-06 20:14:54 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:54.540012 | orchestrator | 2025-07-06 20:14:54 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:14:57.578928 | orchestrator | 2025-07-06 20:14:57 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:14:57.580200 | orchestrator | 2025-07-06 20:14:57 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:14:57.582312 | orchestrator | 2025-07-06 20:14:57 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:14:57.582766 | orchestrator | 2025-07-06 20:14:57 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:00.616520 | orchestrator | 2025-07-06 20:15:00 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:15:00.619139 | orchestrator | 2025-07-06 20:15:00 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:00.620562 | orchestrator | 2025-07-06 20:15:00 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:15:00.620632 | orchestrator | 2025-07-06 20:15:00 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:03.654003 | orchestrator | 2025-07-06 20:15:03 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:15:03.654158 | orchestrator | 2025-07-06 20:15:03 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:03.655335 | orchestrator | 2025-07-06 20:15:03 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:15:03.655361 | orchestrator | 2025-07-06 20:15:03 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:06.702479 | orchestrator | 2025-07-06 20:15:06 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:15:06.704024 | orchestrator | 2025-07-06 20:15:06 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:06.705922 | orchestrator | 2025-07-06 20:15:06 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state STARTED 2025-07-06 20:15:06.705948 | orchestrator | 2025-07-06 20:15:06 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:09.754347 | orchestrator | 2025-07-06 20:15:09 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:15:09.756625 | orchestrator | 2025-07-06 20:15:09 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:09.764989 | orchestrator | 2025-07-06 20:15:09 | INFO  | Task 17704e73-67a4-4643-a91e-ca82ab6ea67f is in state SUCCESS 2025-07-06 20:15:09.765067 | orchestrator | 2025-07-06 20:15:09 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:09.766714 | orchestrator | 2025-07-06 20:15:09.766775 | orchestrator | 2025-07-06 20:15:09.766797 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-07-06 20:15:09.766816 | orchestrator | 2025-07-06 20:15:09.766835 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-07-06 20:15:09.766855 | orchestrator | Sunday 06 July 2025 20:14:18 +0000 (0:00:00.116) 0:00:00.116 
*********** 2025-07-06 20:15:09.766912 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-07-06 20:15:09.766935 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-07-06 20:15:09.767271 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-07-06 20:15:09.767291 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-07-06 20:15:09.767303 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-07-06 20:15:09.767314 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-07-06 20:15:09.767325 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-07-06 20:15:09.767352 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-07-06 20:15:09.767364 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-07-06 20:15:09.767375 | orchestrator | 2025-07-06 20:15:09.767386 | orchestrator | TASK [Create share directory] ************************************************** 2025-07-06 20:15:09.767397 | orchestrator | Sunday 06 July 2025 20:14:22 +0000 (0:00:04.062) 0:00:04.178 *********** 2025-07-06 20:15:09.767409 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-06 20:15:09.767420 | orchestrator | 2025-07-06 20:15:09.767431 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-07-06 20:15:09.767442 | orchestrator | Sunday 06 July 2025 20:14:23 +0000 (0:00:00.944) 0:00:05.123 *********** 2025-07-06 20:15:09.767453 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-07-06 20:15:09.767465 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-07-06 20:15:09.767476 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-07-06 20:15:09.767487 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-07-06 20:15:09.767498 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-07-06 20:15:09.767509 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-07-06 20:15:09.767520 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-07-06 20:15:09.767530 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-07-06 20:15:09.767541 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-07-06 20:15:09.767552 | orchestrator | 2025-07-06 20:15:09.767563 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-07-06 20:15:09.767574 | orchestrator | Sunday 06 July 2025 20:14:35 +0000 (0:00:12.808) 0:00:17.931 *********** 2025-07-06 20:15:09.767586 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-07-06 20:15:09.767597 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-07-06 20:15:09.767607 | orchestrator | changed: [testbed-manager] 
=> (item=ceph.client.cinder.keyring) 2025-07-06 20:15:09.767618 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-07-06 20:15:09.767629 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-07-06 20:15:09.767640 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-07-06 20:15:09.767651 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-07-06 20:15:09.767661 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-07-06 20:15:09.767672 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-07-06 20:15:09.767683 | orchestrator | 2025-07-06 20:15:09.767708 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:15:09.767719 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:15:09.767731 | orchestrator | 2025-07-06 20:15:09.767742 | orchestrator | 2025-07-06 20:15:09.767753 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:15:09.767764 | orchestrator | Sunday 06 July 2025 20:14:42 +0000 (0:00:06.362) 0:00:24.294 *********** 2025-07-06 20:15:09.767775 | orchestrator | =============================================================================== 2025-07-06 20:15:09.767786 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.81s 2025-07-06 20:15:09.767797 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.36s 2025-07-06 20:15:09.767808 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.06s 2025-07-06 20:15:09.767818 | orchestrator | Create share directory -------------------------------------------------- 0.94s 2025-07-06 20:15:09.767829 | orchestrator | 2025-07-06 20:15:09.767840 | orchestrator | 2025-07-06 20:15:09.767851 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:15:09.767862 | orchestrator | 2025-07-06 20:15:09.767889 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:15:09.767902 | orchestrator | Sunday 06 July 2025 20:13:25 +0000 (0:00:00.246) 0:00:00.246 *********** 2025-07-06 20:15:09.767915 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:15:09.767928 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:15:09.767941 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:15:09.767955 | orchestrator | 2025-07-06 20:15:09.767968 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:15:09.767981 | orchestrator | Sunday 06 July 2025 20:13:25 +0000 (0:00:00.250) 0:00:00.496 *********** 2025-07-06 20:15:09.767994 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-07-06 20:15:09.768007 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-07-06 20:15:09.768019 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-07-06 20:15:09.768032 | orchestrator | 2025-07-06 20:15:09.768045 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-07-06 20:15:09.768058 | orchestrator | 2025-07-06 20:15:09.768071 | orchestrator | TASK [horizon : include_tasks] ************************************************* 
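The "Copy ceph keys to the configuration repository" play above fetches the client keyrings listed under "Fetch all ceph keys" from the first monitor and writes them into the share and configuration directories on testbed-manager. A minimal Python sketch of that flow, assuming the ceph CLI is available on the monitor and using a hypothetical target path (the playbook's actual paths and implementation are not shown in this log):

import pathlib
import subprocess

# Keyring names as listed in the "Fetch all ceph keys" task above.
KEYRINGS = [
    "ceph.client.admin.keyring",
    "ceph.client.cinder.keyring",
    "ceph.client.cinder-backup.keyring",
    "ceph.client.nova.keyring",
    "ceph.client.glance.keyring",
    "ceph.client.gnocchi.keyring",
    "ceph.client.manila.keyring",
]

def export_keyring(name: str, target_dir: pathlib.Path) -> None:
    """Export one client keyring via 'ceph auth get' and write it with restrictive permissions."""
    entity = name.removeprefix("ceph.").removesuffix(".keyring")  # e.g. "client.glance" (Python >= 3.9)
    keyring = subprocess.run(
        ["ceph", "auth", "get", entity],
        check=True, capture_output=True, text=True,
    ).stdout
    target_dir.mkdir(parents=True, exist_ok=True)
    path = target_dir / name
    path.write_text(keyring)
    path.chmod(0o600)

if __name__ == "__main__":
    # Hypothetical destination directory; the real share/configuration paths are not part of this log.
    for keyring_name in KEYRINGS:
        export_keyring(keyring_name, pathlib.Path("/tmp/ceph-keys"))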
2025-07-06 20:15:09.768084 | orchestrator | Sunday 06 July 2025 20:13:25 +0000 (0:00:00.329) 0:00:00.826 *********** 2025-07-06 20:15:09.768095 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:09.768106 | orchestrator | 2025-07-06 20:15:09.768122 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-07-06 20:15:09.768133 | orchestrator | Sunday 06 July 2025 20:13:26 +0000 (0:00:00.440) 0:00:01.267 *********** 2025-07-06 20:15:09.768150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-06 20:15:09.768194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-06 20:15:09.768210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-06 20:15:09.768229 | orchestrator | 2025-07-06 20:15:09.768260 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-07-06 20:15:09.768272 | orchestrator | Sunday 06 July 2025 20:13:27 +0000 (0:00:00.996) 0:00:02.264 *********** 2025-07-06 20:15:09.768284 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:15:09.768295 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:15:09.768306 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:15:09.768317 | orchestrator | 2025-07-06 20:15:09.768329 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-06 20:15:09.768340 | orchestrator | Sunday 06 July 2025 20:13:27 +0000 (0:00:00.365) 0:00:02.630 *********** 2025-07-06 20:15:09.768351 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-07-06 20:15:09.768369 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-06 20:15:09.768380 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-07-06 20:15:09.768391 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-07-06 20:15:09.768402 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-07-06 20:15:09.768413 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-07-06 20:15:09.768424 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-07-06 20:15:09.768435 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-07-06 20:15:09.768446 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-07-06 20:15:09.768457 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-06 20:15:09.768468 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-07-06 20:15:09.768478 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-07-06 20:15:09.768489 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-07-06 20:15:09.768505 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-07-06 20:15:09.768517 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-07-06 20:15:09.768528 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-07-06 20:15:09.768539 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-07-06 20:15:09.768550 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-06 20:15:09.768565 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-07-06 20:15:09.768576 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-07-06 20:15:09.768587 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-07-06 20:15:09.768598 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-07-06 20:15:09.768609 | orchestrator | skipping: [testbed-node-2] 
=> (item={'name': 'trove', 'enabled': False})  2025-07-06 20:15:09.768619 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-07-06 20:15:09.768631 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-07-06 20:15:09.768644 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-07-06 20:15:09.768655 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-07-06 20:15:09.768666 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-07-06 20:15:09.768677 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-07-06 20:15:09.768688 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-07-06 20:15:09.768699 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-07-06 20:15:09.768710 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-07-06 20:15:09.768721 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-07-06 20:15:09.768732 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-07-06 20:15:09.768743 | orchestrator | 2025-07-06 20:15:09.768754 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-06 20:15:09.768765 | orchestrator | Sunday 06 July 2025 20:13:28 +0000 (0:00:00.625) 0:00:03.255 *********** 2025-07-06 20:15:09.768776 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:15:09.768787 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:15:09.768798 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:15:09.768809 | orchestrator | 2025-07-06 20:15:09.768820 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-06 20:15:09.768831 | orchestrator | Sunday 06 July 2025 20:13:28 +0000 (0:00:00.250) 0:00:03.506 *********** 2025-07-06 20:15:09.768842 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.768853 | orchestrator | 2025-07-06 20:15:09.768869 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-06 20:15:09.768880 | orchestrator | Sunday 06 July 2025 20:13:28 +0000 (0:00:00.121) 0:00:03.628 *********** 2025-07-06 20:15:09.768891 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.768902 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:09.768912 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:09.768923 | orchestrator | 2025-07-06 20:15:09.768934 | 
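The ten "included: /ansible/roles/horizon/tasks/policy_item.yml ... => (item={'name': ..., 'enabled': ...})" entries above explain the repetition that follows: the same three tasks (Update policy file name, Check if policies shall be overwritten, Update custom policy file name) run once per included service (ceilometer, cinder, designate, glance, keystone, magnum, manila, neutron, nova, octavia). As a rough sketch of that per-service control flow -- not kolla-ansible's actual policy_item.yml, with the candidate file names and config root assumed -- the logic amounts to probing for a custom policy file per service and remembering the first hit:

# Illustrative sketch only; file names and paths below are assumptions.
from pathlib import Path
from typing import Optional

SERVICES = ["ceilometer", "cinder", "designate", "glance", "keystone",
            "magnum", "manila", "neutron", "nova", "octavia"]
CANDIDATES = ("policy.yaml", "policy.json")       # assumed candidate file names
CONFIG_ROOT = Path("/etc/kolla/config")           # assumed custom-config root

def find_custom_policy(service: str) -> Optional[Path]:
    """Return the first existing custom policy file for a service, if any."""
    for name in CANDIDATES:
        candidate = CONFIG_ROOT / service / name
        if candidate.is_file():
            return candidate
    return None

custom_policies = {}
for service in SERVICES:
    path = find_custom_policy(service)
    if path is None:
        continue                                  # no custom file -> the "skipping" results seen above
    custom_policies[service] = path               # a hit would feed "Update custom policy file name"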
orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-06 20:15:09.768945 | orchestrator | Sunday 06 July 2025 20:13:28 +0000 (0:00:00.380) 0:00:04.009 *********** 2025-07-06 20:15:09.768964 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:15:09.768975 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:15:09.768986 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:15:09.768997 | orchestrator | 2025-07-06 20:15:09.769008 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-06 20:15:09.769019 | orchestrator | Sunday 06 July 2025 20:13:29 +0000 (0:00:00.309) 0:00:04.318 *********** 2025-07-06 20:15:09.769030 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.769041 | orchestrator | 2025-07-06 20:15:09.769052 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-06 20:15:09.769063 | orchestrator | Sunday 06 July 2025 20:13:29 +0000 (0:00:00.124) 0:00:04.443 *********** 2025-07-06 20:15:09.769074 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.769085 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:09.769095 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:09.769106 | orchestrator | 2025-07-06 20:15:09.769122 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-06 20:15:09.769134 | orchestrator | Sunday 06 July 2025 20:13:29 +0000 (0:00:00.329) 0:00:04.772 *********** 2025-07-06 20:15:09.769144 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:15:09.769155 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:15:09.769166 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:15:09.769177 | orchestrator | 2025-07-06 20:15:09.769188 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-06 20:15:09.769199 | orchestrator | Sunday 06 July 2025 20:13:29 +0000 (0:00:00.251) 0:00:05.023 *********** 2025-07-06 20:15:09.769210 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.769221 | orchestrator | 2025-07-06 20:15:09.769232 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-06 20:15:09.769259 | orchestrator | Sunday 06 July 2025 20:13:30 +0000 (0:00:00.432) 0:00:05.456 *********** 2025-07-06 20:15:09.769271 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.769282 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:09.769293 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:09.769303 | orchestrator | 2025-07-06 20:15:09.769315 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-06 20:15:09.769326 | orchestrator | Sunday 06 July 2025 20:13:30 +0000 (0:00:00.300) 0:00:05.756 *********** 2025-07-06 20:15:09.769337 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:15:09.769348 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:15:09.769358 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:15:09.769369 | orchestrator | 2025-07-06 20:15:09.769380 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-06 20:15:09.769391 | orchestrator | Sunday 06 July 2025 20:13:30 +0000 (0:00:00.284) 0:00:06.041 *********** 2025-07-06 20:15:09.769402 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.769413 | orchestrator | 2025-07-06 20:15:09.769424 | orchestrator | TASK [horizon : Update 
custom policy file name] ******************************** 2025-07-06 20:15:09.769435 | orchestrator | Sunday 06 July 2025 20:13:30 +0000 (0:00:00.140) 0:00:06.182 *********** 2025-07-06 20:15:09.769446 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.769457 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:09.769468 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:09.769478 | orchestrator | 2025-07-06 20:15:09.769490 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-06 20:15:09.769501 | orchestrator | Sunday 06 July 2025 20:13:31 +0000 (0:00:00.285) 0:00:06.467 *********** 2025-07-06 20:15:09.769512 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:15:09.769522 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:15:09.769533 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:15:09.769544 | orchestrator | 2025-07-06 20:15:09.769555 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-06 20:15:09.769566 | orchestrator | Sunday 06 July 2025 20:13:31 +0000 (0:00:00.505) 0:00:06.973 *********** 2025-07-06 20:15:09.769585 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.769596 | orchestrator | 2025-07-06 20:15:09.769607 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-06 20:15:09.769617 | orchestrator | Sunday 06 July 2025 20:13:31 +0000 (0:00:00.114) 0:00:07.088 *********** 2025-07-06 20:15:09.769628 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.769639 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:09.769650 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:09.769661 | orchestrator | 2025-07-06 20:15:09.769672 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-06 20:15:09.769682 | orchestrator | Sunday 06 July 2025 20:13:32 +0000 (0:00:00.288) 0:00:07.376 *********** 2025-07-06 20:15:09.769693 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:15:09.769704 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:15:09.769715 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:15:09.769726 | orchestrator | 2025-07-06 20:15:09.769737 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-06 20:15:09.769748 | orchestrator | Sunday 06 July 2025 20:13:32 +0000 (0:00:00.292) 0:00:07.668 *********** 2025-07-06 20:15:09.769759 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.769770 | orchestrator | 2025-07-06 20:15:09.769780 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-06 20:15:09.769791 | orchestrator | Sunday 06 July 2025 20:13:32 +0000 (0:00:00.114) 0:00:07.782 *********** 2025-07-06 20:15:09.769802 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.769813 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:09.769824 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:09.769835 | orchestrator | 2025-07-06 20:15:09.769846 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-06 20:15:09.769857 | orchestrator | Sunday 06 July 2025 20:13:33 +0000 (0:00:00.435) 0:00:08.218 *********** 2025-07-06 20:15:09.769868 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:15:09.769885 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:15:09.769897 | orchestrator | ok: [testbed-node-2] 2025-07-06 
20:15:09.769907 | orchestrator | 2025-07-06 20:15:09.769918 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-06 20:15:09.769930 | orchestrator | Sunday 06 July 2025 20:13:33 +0000 (0:00:00.298) 0:00:08.516 *********** 2025-07-06 20:15:09.769940 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.769951 | orchestrator | 2025-07-06 20:15:09.769962 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-06 20:15:09.769973 | orchestrator | Sunday 06 July 2025 20:13:33 +0000 (0:00:00.139) 0:00:08.655 *********** 2025-07-06 20:15:09.769984 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.769995 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:09.770006 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:09.770068 | orchestrator | 2025-07-06 20:15:09.770082 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-06 20:15:09.770094 | orchestrator | Sunday 06 July 2025 20:13:33 +0000 (0:00:00.287) 0:00:08.943 *********** 2025-07-06 20:15:09.770105 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:15:09.770116 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:15:09.770126 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:15:09.770137 | orchestrator | 2025-07-06 20:15:09.770149 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-06 20:15:09.770160 | orchestrator | Sunday 06 July 2025 20:13:34 +0000 (0:00:00.321) 0:00:09.264 *********** 2025-07-06 20:15:09.770176 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.770187 | orchestrator | 2025-07-06 20:15:09.770198 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-06 20:15:09.770209 | orchestrator | Sunday 06 July 2025 20:13:34 +0000 (0:00:00.162) 0:00:09.427 *********** 2025-07-06 20:15:09.770220 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.770231 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:09.770270 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:09.770289 | orchestrator | 2025-07-06 20:15:09.770301 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-06 20:15:09.770311 | orchestrator | Sunday 06 July 2025 20:13:34 +0000 (0:00:00.518) 0:00:09.945 *********** 2025-07-06 20:15:09.770322 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:15:09.770333 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:15:09.770344 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:15:09.770355 | orchestrator | 2025-07-06 20:15:09.770366 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-06 20:15:09.770377 | orchestrator | Sunday 06 July 2025 20:13:35 +0000 (0:00:00.322) 0:00:10.268 *********** 2025-07-06 20:15:09.770388 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.770399 | orchestrator | 2025-07-06 20:15:09.770409 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-06 20:15:09.770420 | orchestrator | Sunday 06 July 2025 20:13:35 +0000 (0:00:00.119) 0:00:10.388 *********** 2025-07-06 20:15:09.770431 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.770442 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:09.770453 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:09.770464 | 
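As an aside before the config-file tasks below: the horizon service definition echoed earlier in this play carries a healthcheck block ('test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'interval': '30', 'timeout': '30', 'retries': '3', 'start_period': '5'). A hedged sketch of how such a block maps onto Docker's native healthcheck via docker-py follows; treating the string values as seconds and the nanosecond conversion are assumptions, and healthcheck_curl is a kolla helper whose exact behaviour is not visible in this log:

# Hedged sketch: mapping a kolla-style healthcheck dict onto docker-py's Healthcheck type.
# Assumes the string values are seconds; Docker's API counts durations in nanoseconds.
from docker.types import Healthcheck

hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:80"], "timeout": "30"}

NS_PER_S = 1_000_000_000

healthcheck = Healthcheck(
    test=hc["test"],
    interval=int(hc["interval"]) * NS_PER_S,
    timeout=int(hc["timeout"]) * NS_PER_S,
    retries=int(hc["retries"]),
    start_period=int(hc["start_period"]) * NS_PER_S,
)
# e.g. docker.from_env().containers.run(image, detach=True, healthcheck=healthcheck, ...)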
orchestrator | 2025-07-06 20:15:09.770475 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-06 20:15:09.770486 | orchestrator | Sunday 06 July 2025 20:13:35 +0000 (0:00:00.290) 0:00:10.679 *********** 2025-07-06 20:15:09.770497 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:15:09.770508 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:15:09.770518 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:15:09.770529 | orchestrator | 2025-07-06 20:15:09.770540 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-06 20:15:09.770551 | orchestrator | Sunday 06 July 2025 20:13:35 +0000 (0:00:00.488) 0:00:11.167 *********** 2025-07-06 20:15:09.770562 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.770573 | orchestrator | 2025-07-06 20:15:09.770584 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-06 20:15:09.770595 | orchestrator | Sunday 06 July 2025 20:13:36 +0000 (0:00:00.131) 0:00:11.299 *********** 2025-07-06 20:15:09.770606 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.770617 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:09.770627 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:09.770638 | orchestrator | 2025-07-06 20:15:09.770649 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-07-06 20:15:09.770660 | orchestrator | Sunday 06 July 2025 20:13:36 +0000 (0:00:00.284) 0:00:11.584 *********** 2025-07-06 20:15:09.770671 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:09.770682 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:09.770692 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:09.770703 | orchestrator | 2025-07-06 20:15:09.770714 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-07-06 20:15:09.770726 | orchestrator | Sunday 06 July 2025 20:13:37 +0000 (0:00:01.554) 0:00:13.138 *********** 2025-07-06 20:15:09.770736 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-06 20:15:09.770747 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-06 20:15:09.770758 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-06 20:15:09.770769 | orchestrator | 2025-07-06 20:15:09.770780 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-07-06 20:15:09.770791 | orchestrator | Sunday 06 July 2025 20:13:39 +0000 (0:00:01.848) 0:00:14.987 *********** 2025-07-06 20:15:09.770802 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-06 20:15:09.770813 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-06 20:15:09.770824 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-06 20:15:09.770848 | orchestrator | 2025-07-06 20:15:09.770859 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-07-06 20:15:09.770877 | orchestrator | Sunday 06 July 2025 20:13:41 +0000 (0:00:02.127) 0:00:17.114 *********** 2025-07-06 20:15:09.770888 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-06 20:15:09.770900 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-06 20:15:09.770911 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-06 20:15:09.770922 | orchestrator | 2025-07-06 20:15:09.770933 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-07-06 20:15:09.770944 | orchestrator | Sunday 06 July 2025 20:13:43 +0000 (0:00:01.582) 0:00:18.697 *********** 2025-07-06 20:15:09.770955 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.770966 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:09.770977 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:09.770988 | orchestrator | 2025-07-06 20:15:09.770999 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-07-06 20:15:09.771010 | orchestrator | Sunday 06 July 2025 20:13:43 +0000 (0:00:00.276) 0:00:18.973 *********** 2025-07-06 20:15:09.771020 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.771031 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:09.771042 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:09.771053 | orchestrator | 2025-07-06 20:15:09.771064 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-06 20:15:09.771080 | orchestrator | Sunday 06 July 2025 20:13:44 +0000 (0:00:00.268) 0:00:19.242 *********** 2025-07-06 20:15:09.771093 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:09.771113 | orchestrator | 2025-07-06 20:15:09.771132 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-07-06 20:15:09.771150 | orchestrator | Sunday 06 July 2025 20:13:44 +0000 (0:00:00.766) 0:00:20.009 *********** 2025-07-06 20:15:09.771170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-06 20:15:09.771238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-06 20:15:09.771293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-06 20:15:09.771326 | orchestrator | 2025-07-06 20:15:09.771347 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-07-06 20:15:09.771367 | orchestrator | Sunday 06 July 2025 20:13:46 +0000 (0:00:01.510) 0:00:21.519 *********** 2025-07-06 20:15:09.771406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-06 20:15:09.771420 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.771439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-06 20:15:09.771459 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:09.771477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-06 20:15:09.771489 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:09.771500 | orchestrator | 2025-07-06 20:15:09.771511 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-07-06 20:15:09.771523 | orchestrator | Sunday 06 July 2025 20:13:46 +0000 (0:00:00.590) 0:00:22.110 *********** 2025-07-06 20:15:09.771543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-06 20:15:09.771562 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.771580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-06 20:15:09.771598 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:09.771618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-06 20:15:09.771631 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:09.771646 | orchestrator | 2025-07-06 20:15:09.771658 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-07-06 20:15:09.771669 | orchestrator | Sunday 06 July 2025 20:13:47 +0000 (0:00:01.074) 0:00:23.184 *********** 2025-07-06 20:15:09.771681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-06 20:15:09.771721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-06 20:15:09.771735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-06 20:15:09.771754 | orchestrator | 2025-07-06 20:15:09.771765 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-06 20:15:09.771776 | orchestrator | Sunday 06 July 2025 20:13:49 +0000 (0:00:01.532) 0:00:24.717 *********** 2025-07-06 20:15:09.771787 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:09.771798 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:09.771809 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:09.771820 | orchestrator | 2025-07-06 20:15:09.771831 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-06 20:15:09.771843 | orchestrator | Sunday 06 July 2025 20:13:49 +0000 (0:00:00.289) 0:00:25.007 *********** 2025-07-06 20:15:09.771859 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:09.771870 | orchestrator | 2025-07-06 20:15:09.771881 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-07-06 20:15:09.771892 | orchestrator | Sunday 06 July 2025 20:13:50 +0000 (0:00:00.704) 0:00:25.711 *********** 2025-07-06 20:15:09.771903 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:09.771914 | orchestrator | 2025-07-06 20:15:09.771925 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-07-06 20:15:09.771936 | orchestrator | Sunday 06 July 2025 20:13:52 +0000 (0:00:02.336) 0:00:28.048 *********** 2025-07-06 20:15:09.771947 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:09.771958 | orchestrator | 2025-07-06 20:15:09.771969 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-07-06 20:15:09.771980 | orchestrator | Sunday 06 July 2025 20:13:54 +0000 (0:00:01.986) 0:00:30.035 *********** 2025-07-06 20:15:09.771990 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:09.772001 | orchestrator | 2025-07-06 20:15:09.772012 | orchestrator | TASK [horizon : Flush handlers] 
************************************************ 2025-07-06 20:15:09.772023 | orchestrator | Sunday 06 July 2025 20:14:09 +0000 (0:00:14.191) 0:00:44.226 *********** 2025-07-06 20:15:09.772034 | orchestrator | 2025-07-06 20:15:09.772045 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-06 20:15:09.772056 | orchestrator | Sunday 06 July 2025 20:14:09 +0000 (0:00:00.063) 0:00:44.290 *********** 2025-07-06 20:15:09.772067 | orchestrator | 2025-07-06 20:15:09.772086 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-06 20:15:09.772097 | orchestrator | Sunday 06 July 2025 20:14:09 +0000 (0:00:00.060) 0:00:44.350 *********** 2025-07-06 20:15:09.772108 | orchestrator | 2025-07-06 20:15:09.772119 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-07-06 20:15:09.772130 | orchestrator | Sunday 06 July 2025 20:14:09 +0000 (0:00:00.063) 0:00:44.414 *********** 2025-07-06 20:15:09.772141 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:09.772152 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:09.772170 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:09.772181 | orchestrator | 2025-07-06 20:15:09.772191 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:15:09.772203 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-07-06 20:15:09.772214 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-07-06 20:15:09.772226 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-07-06 20:15:09.772237 | orchestrator | 2025-07-06 20:15:09.772417 | orchestrator | 2025-07-06 20:15:09.772437 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:15:09.772449 | orchestrator | Sunday 06 July 2025 20:15:09 +0000 (0:00:59.807) 0:01:44.221 *********** 2025-07-06 20:15:09.772459 | orchestrator | =============================================================================== 2025-07-06 20:15:09.772471 | orchestrator | horizon : Restart horizon container ------------------------------------ 59.81s 2025-07-06 20:15:09.772482 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.19s 2025-07-06 20:15:09.772500 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.34s 2025-07-06 20:15:09.772520 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.13s 2025-07-06 20:15:09.772538 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 1.99s 2025-07-06 20:15:09.772550 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.85s 2025-07-06 20:15:09.772561 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.58s 2025-07-06 20:15:09.772572 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.55s 2025-07-06 20:15:09.772582 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.53s 2025-07-06 20:15:09.772593 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.51s 2025-07-06 20:15:09.772604 | orchestrator | 
service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.07s 2025-07-06 20:15:09.772615 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.00s 2025-07-06 20:15:09.772625 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2025-07-06 20:15:09.772636 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s 2025-07-06 20:15:09.772647 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.63s 2025-07-06 20:15:09.772658 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.59s 2025-07-06 20:15:09.772668 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s 2025-07-06 20:15:09.772679 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s 2025-07-06 20:15:09.772690 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2025-07-06 20:15:09.772718 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.44s 2025-07-06 20:15:12.816989 | orchestrator | 2025-07-06 20:15:12 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:15:12.819746 | orchestrator | 2025-07-06 20:15:12 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:12.819831 | orchestrator | 2025-07-06 20:15:12 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:15.866603 | orchestrator | 2025-07-06 20:15:15 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:15:15.868132 | orchestrator | 2025-07-06 20:15:15 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:15.868194 | orchestrator | 2025-07-06 20:15:15 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:18.915454 | orchestrator | 2025-07-06 20:15:18 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:15:18.915576 | orchestrator | 2025-07-06 20:15:18 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:18.915599 | orchestrator | 2025-07-06 20:15:18 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:21.957908 | orchestrator | 2025-07-06 20:15:21 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:15:21.961212 | orchestrator | 2025-07-06 20:15:21 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:21.961344 | orchestrator | 2025-07-06 20:15:21 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:25.008985 | orchestrator | 2025-07-06 20:15:25 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:15:25.011946 | orchestrator | 2025-07-06 20:15:25 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:25.012015 | orchestrator | 2025-07-06 20:15:25 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:28.055593 | orchestrator | 2025-07-06 20:15:28 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:15:28.056948 | orchestrator | 2025-07-06 20:15:28 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:28.057136 | orchestrator | 2025-07-06 20:15:28 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:31.110515 | orchestrator | 
2025-07-06 20:15:31 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:15:31.112638 | orchestrator | 2025-07-06 20:15:31 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:31.112876 | orchestrator | 2025-07-06 20:15:31 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:34.156087 | orchestrator | 2025-07-06 20:15:34 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:15:34.158434 | orchestrator | 2025-07-06 20:15:34 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:34.158480 | orchestrator | 2025-07-06 20:15:34 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:37.214560 | orchestrator | 2025-07-06 20:15:37 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:15:37.216405 | orchestrator | 2025-07-06 20:15:37 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:37.217720 | orchestrator | 2025-07-06 20:15:37 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:40.256513 | orchestrator | 2025-07-06 20:15:40 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state STARTED 2025-07-06 20:15:40.256643 | orchestrator | 2025-07-06 20:15:40 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:40.256661 | orchestrator | 2025-07-06 20:15:40 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:43.326778 | orchestrator | 2025-07-06 20:15:43 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:15:43.331863 | orchestrator | 2025-07-06 20:15:43 | INFO  | Task a00b85b6-ebfb-4cbf-9b04-3b6b9b985275 is in state SUCCESS 2025-07-06 20:15:43.332335 | orchestrator | 2025-07-06 20:15:43 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:43.336990 | orchestrator | 2025-07-06 20:15:43 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:15:43.339281 | orchestrator | 2025-07-06 20:15:43 | INFO  | Task 11d76af2-dfed-497b-b739-8ca2decd0a83 is in state STARTED 2025-07-06 20:15:43.339533 | orchestrator | 2025-07-06 20:15:43 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:46.421599 | orchestrator | 2025-07-06 20:15:46 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:15:46.421704 | orchestrator | 2025-07-06 20:15:46 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:46.422932 | orchestrator | 2025-07-06 20:15:46 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:15:46.423560 | orchestrator | 2025-07-06 20:15:46 | INFO  | Task 11d76af2-dfed-497b-b739-8ca2decd0a83 is in state STARTED 2025-07-06 20:15:46.423586 | orchestrator | 2025-07-06 20:15:46 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:49.463799 | orchestrator | 2025-07-06 20:15:49 | INFO  | Task e70e0b9f-02fe-45a7-a358-3f09f5c6890c is in state STARTED 2025-07-06 20:15:49.463896 | orchestrator | 2025-07-06 20:15:49 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:15:49.467499 | orchestrator | 2025-07-06 20:15:49 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:15:49.468374 | orchestrator | 2025-07-06 20:15:49 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:49.468767 | orchestrator | 2025-07-06 20:15:49 | INFO 
 | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:15:49.472262 | orchestrator | 2025-07-06 20:15:49 | INFO  | Task 11d76af2-dfed-497b-b739-8ca2decd0a83 is in state SUCCESS 2025-07-06 20:15:49.472331 | orchestrator | 2025-07-06 20:15:49 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:52.509387 | orchestrator | 2025-07-06 20:15:52 | INFO  | Task e70e0b9f-02fe-45a7-a358-3f09f5c6890c is in state STARTED 2025-07-06 20:15:52.510256 | orchestrator | 2025-07-06 20:15:52 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:15:52.512389 | orchestrator | 2025-07-06 20:15:52 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:15:52.515523 | orchestrator | 2025-07-06 20:15:52 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:52.515550 | orchestrator | 2025-07-06 20:15:52 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:15:52.515560 | orchestrator | 2025-07-06 20:15:52 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:55.564490 | orchestrator | 2025-07-06 20:15:55 | INFO  | Task e70e0b9f-02fe-45a7-a358-3f09f5c6890c is in state STARTED 2025-07-06 20:15:55.564672 | orchestrator | 2025-07-06 20:15:55 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:15:55.567200 | orchestrator | 2025-07-06 20:15:55 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:15:55.569655 | orchestrator | 2025-07-06 20:15:55 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:55.571319 | orchestrator | 2025-07-06 20:15:55 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:15:55.571350 | orchestrator | 2025-07-06 20:15:55 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:15:58.610886 | orchestrator | 2025-07-06 20:15:58 | INFO  | Task e70e0b9f-02fe-45a7-a358-3f09f5c6890c is in state STARTED 2025-07-06 20:15:58.612798 | orchestrator | 2025-07-06 20:15:58 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:15:58.614441 | orchestrator | 2025-07-06 20:15:58 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:15:58.616385 | orchestrator | 2025-07-06 20:15:58 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:15:58.623623 | orchestrator | 2025-07-06 20:15:58 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:15:58.623683 | orchestrator | 2025-07-06 20:15:58 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:01.665795 | orchestrator | 2025-07-06 20:16:01 | INFO  | Task e70e0b9f-02fe-45a7-a358-3f09f5c6890c is in state STARTED 2025-07-06 20:16:01.666159 | orchestrator | 2025-07-06 20:16:01 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:01.667066 | orchestrator | 2025-07-06 20:16:01 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:01.668242 | orchestrator | 2025-07-06 20:16:01 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:16:01.669124 | orchestrator | 2025-07-06 20:16:01 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:01.669147 | orchestrator | 2025-07-06 20:16:01 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:04.709735 | orchestrator | 2025-07-06 20:16:04 | INFO  | 
Task e70e0b9f-02fe-45a7-a358-3f09f5c6890c is in state STARTED 2025-07-06 20:16:04.709832 | orchestrator | 2025-07-06 20:16:04 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:04.712349 | orchestrator | 2025-07-06 20:16:04 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:04.712373 | orchestrator | 2025-07-06 20:16:04 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state STARTED 2025-07-06 20:16:04.713234 | orchestrator | 2025-07-06 20:16:04 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:04.713274 | orchestrator | 2025-07-06 20:16:04 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:07.774938 | orchestrator | 2025-07-06 20:16:07 | INFO  | Task e70e0b9f-02fe-45a7-a358-3f09f5c6890c is in state STARTED 2025-07-06 20:16:07.775038 | orchestrator | 2025-07-06 20:16:07 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:07.780166 | orchestrator | 2025-07-06 20:16:07.780350 | orchestrator | 2025-07-06 20:16:07.780372 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-07-06 20:16:07.780385 | orchestrator | 2025-07-06 20:16:07.780481 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-07-06 20:16:07.780497 | orchestrator | Sunday 06 July 2025 20:14:46 +0000 (0:00:00.226) 0:00:00.226 *********** 2025-07-06 20:16:07.780993 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-07-06 20:16:07.781018 | orchestrator | 2025-07-06 20:16:07.781032 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-07-06 20:16:07.781045 | orchestrator | Sunday 06 July 2025 20:14:46 +0000 (0:00:00.228) 0:00:00.454 *********** 2025-07-06 20:16:07.781057 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-07-06 20:16:07.781069 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-07-06 20:16:07.781082 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-07-06 20:16:07.781094 | orchestrator | 2025-07-06 20:16:07.781107 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-07-06 20:16:07.781119 | orchestrator | Sunday 06 July 2025 20:14:47 +0000 (0:00:01.157) 0:00:01.612 *********** 2025-07-06 20:16:07.781131 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-07-06 20:16:07.781167 | orchestrator | 2025-07-06 20:16:07.781180 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-07-06 20:16:07.781193 | orchestrator | Sunday 06 July 2025 20:14:49 +0000 (0:00:01.130) 0:00:02.743 *********** 2025-07-06 20:16:07.781236 | orchestrator | changed: [testbed-manager] 2025-07-06 20:16:07.781247 | orchestrator | 2025-07-06 20:16:07.781258 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-07-06 20:16:07.781269 | orchestrator | Sunday 06 July 2025 20:14:50 +0000 (0:00:00.939) 0:00:03.682 *********** 2025-07-06 20:16:07.781280 | orchestrator | changed: [testbed-manager] 2025-07-06 20:16:07.781291 | orchestrator | 2025-07-06 20:16:07.781324 | orchestrator | TASK [osism.services.cephclient : Manage cephclient 
service] ******************* 2025-07-06 20:16:07.781335 | orchestrator | Sunday 06 July 2025 20:14:50 +0000 (0:00:00.817) 0:00:04.500 *********** 2025-07-06 20:16:07.781346 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-07-06 20:16:07.781357 | orchestrator | ok: [testbed-manager] 2025-07-06 20:16:07.781368 | orchestrator | 2025-07-06 20:16:07.781379 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-07-06 20:16:07.781390 | orchestrator | Sunday 06 July 2025 20:15:31 +0000 (0:00:40.630) 0:00:45.130 *********** 2025-07-06 20:16:07.781401 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-07-06 20:16:07.781412 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-07-06 20:16:07.781423 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-07-06 20:16:07.781434 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-07-06 20:16:07.781445 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-07-06 20:16:07.781456 | orchestrator | 2025-07-06 20:16:07.781467 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-07-06 20:16:07.781478 | orchestrator | Sunday 06 July 2025 20:15:35 +0000 (0:00:03.957) 0:00:49.088 *********** 2025-07-06 20:16:07.781489 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-07-06 20:16:07.781500 | orchestrator | 2025-07-06 20:16:07.781511 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-07-06 20:16:07.781522 | orchestrator | Sunday 06 July 2025 20:15:35 +0000 (0:00:00.445) 0:00:49.534 *********** 2025-07-06 20:16:07.781532 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:16:07.781543 | orchestrator | 2025-07-06 20:16:07.781554 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-07-06 20:16:07.781569 | orchestrator | Sunday 06 July 2025 20:15:35 +0000 (0:00:00.120) 0:00:49.654 *********** 2025-07-06 20:16:07.781589 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:16:07.781608 | orchestrator | 2025-07-06 20:16:07.781627 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-07-06 20:16:07.781645 | orchestrator | Sunday 06 July 2025 20:15:36 +0000 (0:00:00.296) 0:00:49.951 *********** 2025-07-06 20:16:07.781662 | orchestrator | changed: [testbed-manager] 2025-07-06 20:16:07.781680 | orchestrator | 2025-07-06 20:16:07.781696 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-07-06 20:16:07.781713 | orchestrator | Sunday 06 July 2025 20:15:38 +0000 (0:00:01.985) 0:00:51.936 *********** 2025-07-06 20:16:07.781729 | orchestrator | changed: [testbed-manager] 2025-07-06 20:16:07.781746 | orchestrator | 2025-07-06 20:16:07.781763 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-07-06 20:16:07.781781 | orchestrator | Sunday 06 July 2025 20:15:38 +0000 (0:00:00.712) 0:00:52.649 *********** 2025-07-06 20:16:07.781799 | orchestrator | changed: [testbed-manager] 2025-07-06 20:16:07.781818 | orchestrator | 2025-07-06 20:16:07.781837 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-07-06 20:16:07.781855 | orchestrator | Sunday 06 July 2025 20:15:39 +0000 (0:00:00.599) 0:00:53.248 *********** 2025-07-06 20:16:07.781873 | 
orchestrator | ok: [testbed-manager] => (item=ceph) 2025-07-06 20:16:07.781891 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-07-06 20:16:07.781929 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-07-06 20:16:07.781949 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-07-06 20:16:07.781967 | orchestrator | 2025-07-06 20:16:07.781986 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:16:07.782004 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:16:07.782079 | orchestrator | 2025-07-06 20:16:07.782101 | orchestrator | 2025-07-06 20:16:07.782247 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:16:07.782275 | orchestrator | Sunday 06 July 2025 20:15:41 +0000 (0:00:01.533) 0:00:54.782 *********** 2025-07-06 20:16:07.782295 | orchestrator | =============================================================================== 2025-07-06 20:16:07.782314 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.63s 2025-07-06 20:16:07.782345 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.96s 2025-07-06 20:16:07.782363 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.99s 2025-07-06 20:16:07.782382 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.53s 2025-07-06 20:16:07.782393 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.16s 2025-07-06 20:16:07.782404 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.13s 2025-07-06 20:16:07.782415 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.94s 2025-07-06 20:16:07.782426 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.82s 2025-07-06 20:16:07.782436 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.71s 2025-07-06 20:16:07.782447 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.60s 2025-07-06 20:16:07.782458 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s 2025-07-06 20:16:07.782468 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s 2025-07-06 20:16:07.782479 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s 2025-07-06 20:16:07.782490 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2025-07-06 20:16:07.782501 | orchestrator | 2025-07-06 20:16:07.782513 | orchestrator | 2025-07-06 20:16:07.782524 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:16:07.782534 | orchestrator | 2025-07-06 20:16:07.782545 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:16:07.782556 | orchestrator | Sunday 06 July 2025 20:15:45 +0000 (0:00:00.174) 0:00:00.174 *********** 2025-07-06 20:16:07.782567 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:16:07.782578 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:16:07.782589 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:16:07.782600 | orchestrator | 2025-07-06 
20:16:07.782611 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:16:07.782622 | orchestrator | Sunday 06 July 2025 20:15:45 +0000 (0:00:00.326) 0:00:00.501 *********** 2025-07-06 20:16:07.782633 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-07-06 20:16:07.782644 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-07-06 20:16:07.782655 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-07-06 20:16:07.782665 | orchestrator | 2025-07-06 20:16:07.782676 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-07-06 20:16:07.782687 | orchestrator | 2025-07-06 20:16:07.782698 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-07-06 20:16:07.782709 | orchestrator | Sunday 06 July 2025 20:15:46 +0000 (0:00:00.774) 0:00:01.275 *********** 2025-07-06 20:16:07.782720 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:16:07.782731 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:16:07.782741 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:16:07.782765 | orchestrator | 2025-07-06 20:16:07.782776 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:16:07.782788 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:16:07.782800 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:16:07.782811 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:16:07.782822 | orchestrator | 2025-07-06 20:16:07.782833 | orchestrator | 2025-07-06 20:16:07.782844 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:16:07.782860 | orchestrator | Sunday 06 July 2025 20:15:47 +0000 (0:00:00.762) 0:00:02.038 *********** 2025-07-06 20:16:07.782879 | orchestrator | =============================================================================== 2025-07-06 20:16:07.782896 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.77s 2025-07-06 20:16:07.782913 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.76s 2025-07-06 20:16:07.782931 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-07-06 20:16:07.782950 | orchestrator | 2025-07-06 20:16:07.782969 | orchestrator | 2025-07-06 20:16:07.782987 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:16:07.783003 | orchestrator | 2025-07-06 20:16:07.783014 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:16:07.783025 | orchestrator | Sunday 06 July 2025 20:13:25 +0000 (0:00:00.230) 0:00:00.230 *********** 2025-07-06 20:16:07.783036 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:16:07.783047 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:16:07.783058 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:16:07.783069 | orchestrator | 2025-07-06 20:16:07.783080 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:16:07.783091 | orchestrator | Sunday 06 July 2025 20:13:25 +0000 (0:00:00.212) 0:00:00.443 *********** 2025-07-06 
20:16:07.783102 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-07-06 20:16:07.783112 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-07-06 20:16:07.783131 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-07-06 20:16:07.783158 | orchestrator | 2025-07-06 20:16:07.783180 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-07-06 20:16:07.783197 | orchestrator | 2025-07-06 20:16:07.783353 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-06 20:16:07.783377 | orchestrator | Sunday 06 July 2025 20:13:25 +0000 (0:00:00.288) 0:00:00.731 *********** 2025-07-06 20:16:07.783396 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:16:07.783408 | orchestrator | 2025-07-06 20:16:07.783426 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-07-06 20:16:07.783436 | orchestrator | Sunday 06 July 2025 20:13:25 +0000 (0:00:00.394) 0:00:01.125 *********** 2025-07-06 20:16:07.783452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.783478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.783490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.783535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-06 20:16:07.783554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-06 20:16:07.783565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-06 20:16:07.783582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.783593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.783604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.783614 | orchestrator | 2025-07-06 20:16:07.783624 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-07-06 20:16:07.783634 | orchestrator | Sunday 06 July 2025 20:13:27 +0000 (0:00:01.700) 0:00:02.826 *********** 2025-07-06 20:16:07.783644 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-07-06 20:16:07.783654 | orchestrator | 2025-07-06 20:16:07.783664 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-07-06 20:16:07.783673 | orchestrator | Sunday 06 July 2025 20:13:28 +0000 (0:00:00.694) 0:00:03.521 *********** 2025-07-06 20:16:07.783683 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:16:07.783693 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:16:07.783703 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:16:07.783712 | orchestrator | 2025-07-06 20:16:07.783722 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-07-06 20:16:07.783732 | orchestrator | Sunday 06 July 2025 20:13:28 +0000 (0:00:00.373) 0:00:03.894 *********** 2025-07-06 20:16:07.783741 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:16:07.783751 | orchestrator | 2025-07-06 20:16:07.783761 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-06 20:16:07.783795 | orchestrator | Sunday 06 July 2025 20:13:29 +0000 (0:00:00.670) 0:00:04.565 *********** 2025-07-06 20:16:07.783807 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:16:07.783817 | orchestrator | 2025-07-06 20:16:07.783827 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-07-06 20:16:07.783837 | orchestrator | Sunday 06 July 2025 20:13:29 +0000 (0:00:00.491) 0:00:05.056 *********** 2025-07-06 20:16:07.783852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.783870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.783882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.783893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
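(Readability aid: the kolla service definitions that these keystone tasks iterate over are printed as single-line Python dicts, which are hard to scan. The block below is a re-indentation of the keystone entry exactly as logged for testbed-node-0; it is a sketch for reading convenience, not an excerpt from the kolla-ansible source, and the note about the other nodes is taken from their items in this same output.)

    # Re-indented copy of the keystone service definition dumped in the log above.
    # Values are exactly those printed for testbed-node-0; testbed-node-1/2 differ
    # only in the healthcheck address (192.168.16.11 / 192.168.16.12).
    keystone_service = {
        'container_name': 'keystone',
        'group': 'keystone',
        'enabled': True,
        'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530',
        'volumes': [
            '/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro',
            '/etc/localtime:/etc/localtime:ro',
            '/etc/timezone:/etc/timezone:ro',
            '',  # empty strings appear verbatim in the logged volume list
            'kolla_logs:/var/log/kolla/',
            '',
            'keystone_fernet_tokens:/etc/keystone/fernet-keys',
        ],
        'dimensions': {},
        'healthcheck': {
            'interval': '30',
            'retries': '3',
            'start_period': '5',
            'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'],
            'timeout': '30',
        },
        'haproxy': {
            'keystone_internal': {
                'enabled': True, 'mode': 'http', 'external': False,
                'tls_backend': 'no', 'port': '5000', 'listen_port': '5000',
                'backend_http_extra': ['balance roundrobin'],
            },
            'keystone_external': {
                'enabled': True, 'mode': 'http', 'external': True,
                'external_fqdn': 'api.testbed.osism.xyz',
                'tls_backend': 'no', 'port': '5000', 'listen_port': '5000',
                'backend_http_extra': ['balance roundrobin'],
            },
        },
    }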
2025-07-06 20:16:07.783915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-06 20:16:07.783934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-06 20:16:07.783945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.783955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.783965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.783975 | orchestrator | 2025-07-06 20:16:07.783985 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-07-06 20:16:07.783995 | orchestrator | Sunday 06 July 2025 20:13:33 +0000 
(0:00:03.597) 0:00:08.654 *********** 2025-07-06 20:16:07.784015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-06 20:16:07.784042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:16:07.784053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-06 20:16:07.784064 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:16:07.784074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}}}})  2025-07-06 20:16:07.784085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:16:07.784095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-06 20:16:07.784105 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:16:07.784128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-06 20:16:07.784145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:16:07.784156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-06 20:16:07.784166 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:16:07.784176 | orchestrator | 2025-07-06 20:16:07.784186 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-07-06 20:16:07.784196 | orchestrator | Sunday 06 July 2025 20:13:33 +0000 (0:00:00.531) 0:00:09.186 *********** 2025-07-06 20:16:07.784225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-06 20:16:07.784236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:16:07.784261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-06 20:16:07.784278 | orchestrator | 2025-07-06 20:16:07 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:07.784289 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:16:07.784299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-06 20:16:07.784310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:16:07.784321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-06 20:16:07.784331 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:16:07.784342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-06 20:16:07.784371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:16:07.784383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-06 20:16:07.784393 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:16:07.784403 | orchestrator | 2025-07-06 20:16:07.784413 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-07-06 20:16:07.784423 | orchestrator | Sunday 06 July 2025 20:13:34 +0000 (0:00:00.801) 0:00:09.987 *********** 2025-07-06 20:16:07.784433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.784444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.784469 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.784485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-06 20:16:07.784496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-06 20:16:07.784506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-06 20:16:07.784517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.784527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.784544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.784554 | orchestrator | 2025-07-06 20:16:07.784564 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-07-06 20:16:07.784579 | orchestrator | Sunday 06 July 2025 20:13:38 +0000 (0:00:03.536) 0:00:13.524 *********** 2025-07-06 20:16:07.784591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.784602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:16:07.784613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.784623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:16:07.784683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.784696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:16:07.784707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.784717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.784728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.784746 | orchestrator | 2025-07-06 20:16:07.784756 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-07-06 20:16:07.784766 | orchestrator | Sunday 06 July 2025 20:13:42 +0000 (0:00:04.591) 0:00:18.116 *********** 2025-07-06 20:16:07.784776 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:16:07.784786 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:16:07.784795 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:16:07.784805 | orchestrator | 2025-07-06 20:16:07.784815 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-07-06 20:16:07.784825 | orchestrator | Sunday 06 July 2025 20:13:44 +0000 (0:00:01.437) 0:00:19.553 *********** 2025-07-06 20:16:07.784834 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:16:07.784844 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:16:07.784853 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:16:07.784863 | orchestrator | 2025-07-06 20:16:07.784873 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-07-06 20:16:07.784882 | orchestrator | Sunday 06 July 2025 20:13:44 +0000 (0:00:00.485) 0:00:20.039 *********** 2025-07-06 20:16:07.784892 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:16:07.784902 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:16:07.784912 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:16:07.784921 | orchestrator | 2025-07-06 20:16:07.784931 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-07-06 20:16:07.784940 | orchestrator | Sunday 06 July 2025 20:13:45 +0000 (0:00:00.533) 0:00:20.572 *********** 2025-07-06 20:16:07.784950 | orchestrator | skipping: 
[testbed-node-0] 2025-07-06 20:16:07.784960 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:16:07.784969 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:16:07.784979 | orchestrator | 2025-07-06 20:16:07.784989 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-07-06 20:16:07.785004 | orchestrator | Sunday 06 July 2025 20:13:45 +0000 (0:00:00.320) 0:00:20.892 *********** 2025-07-06 20:16:07.785020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.785031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:16:07.785042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.785062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:16:07.785079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.785095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:16:07.785106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.785116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.785132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.785143 | orchestrator | 2025-07-06 20:16:07.785153 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-06 20:16:07.785162 | orchestrator | Sunday 06 July 2025 20:13:47 +0000 (0:00:02.230) 0:00:23.123 *********** 2025-07-06 20:16:07.785172 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:16:07.785182 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:16:07.785192 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:16:07.785251 | orchestrator | 2025-07-06 20:16:07.785262 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-07-06 20:16:07.785272 | orchestrator | Sunday 06 July 2025 20:13:48 +0000 (0:00:00.367) 0:00:23.491 *********** 2025-07-06 20:16:07.785282 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-06 20:16:07.785292 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-06 20:16:07.785302 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-06 20:16:07.785312 | orchestrator | 2025-07-06 20:16:07.785321 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-07-06 20:16:07.785331 | orchestrator | Sunday 06 July 2025 20:13:50 +0000 (0:00:01.996) 0:00:25.487 *********** 2025-07-06 20:16:07.785341 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:16:07.785351 | orchestrator | 2025-07-06 20:16:07.785361 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-07-06 20:16:07.785370 | orchestrator | Sunday 06 July 2025 20:13:51 +0000 (0:00:00.910) 0:00:26.398 *********** 2025-07-06 20:16:07.785380 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:16:07.785390 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:16:07.785400 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:16:07.785409 | orchestrator | 2025-07-06 20:16:07.785419 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-07-06 20:16:07.785435 | orchestrator | Sunday 06 July 2025 20:13:51 +0000 (0:00:00.499) 0:00:26.897 *********** 2025-07-06 20:16:07.785446 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:16:07.785456 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-06 20:16:07.785465 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-06 20:16:07.785475 | orchestrator | 2025-07-06 20:16:07.785485 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-07-06 20:16:07.785499 | orchestrator | Sunday 06 July 2025 20:13:52 +0000 (0:00:00.977) 0:00:27.875 *********** 2025-07-06 20:16:07.785509 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:16:07.785520 | orchestrator | ok: [testbed-node-1] 
2025-07-06 20:16:07.785529 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:16:07.785539 | orchestrator | 2025-07-06 20:16:07.785549 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-07-06 20:16:07.785566 | orchestrator | Sunday 06 July 2025 20:13:52 +0000 (0:00:00.274) 0:00:28.150 *********** 2025-07-06 20:16:07.785576 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-06 20:16:07.785586 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-06 20:16:07.785596 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-06 20:16:07.785606 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-06 20:16:07.785615 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-06 20:16:07.785625 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-06 20:16:07.785635 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-06 20:16:07.785645 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-06 20:16:07.785655 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-06 20:16:07.785665 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-06 20:16:07.785674 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-06 20:16:07.785684 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-06 20:16:07.785694 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-06 20:16:07.785703 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-06 20:16:07.785713 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-06 20:16:07.785723 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-06 20:16:07.785732 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-06 20:16:07.785742 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-06 20:16:07.785752 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-06 20:16:07.785762 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-06 20:16:07.785771 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-06 20:16:07.785779 | orchestrator | 2025-07-06 20:16:07.785787 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-07-06 20:16:07.785795 | orchestrator | Sunday 06 July 2025 20:14:01 +0000 (0:00:08.863) 0:00:37.013 *********** 2025-07-06 20:16:07.785803 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 
'sshd_config'}) 2025-07-06 20:16:07.785811 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-06 20:16:07.785819 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-06 20:16:07.785827 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-06 20:16:07.785835 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-06 20:16:07.785843 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-06 20:16:07.785851 | orchestrator | 2025-07-06 20:16:07.785859 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-07-06 20:16:07.785867 | orchestrator | Sunday 06 July 2025 20:14:04 +0000 (0:00:02.608) 0:00:39.621 *********** 2025-07-06 20:16:07.785891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.785901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.785912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:16:07.785927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-06 20:16:07.785942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-06 20:16:07.785974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-06 20:16:07.785990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.786004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.786061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:16:07.786072 | orchestrator | 2025-07-06 20:16:07.786080 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-06 20:16:07.786089 | orchestrator | Sunday 06 July 2025 20:14:06 +0000 (0:00:02.292) 0:00:41.913 *********** 2025-07-06 20:16:07.786096 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:16:07.786104 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:16:07.786112 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:16:07.786120 | orchestrator | 2025-07-06 20:16:07.786128 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-07-06 20:16:07.786136 | orchestrator | Sunday 06 July 2025 20:14:06 +0000 (0:00:00.297) 0:00:42.211 *********** 2025-07-06 20:16:07.786144 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:16:07.786152 | orchestrator | 2025-07-06 20:16:07.786160 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-07-06 20:16:07.786168 | orchestrator | Sunday 06 July 2025 20:14:09 +0000 (0:00:02.170) 0:00:44.382 *********** 2025-07-06 20:16:07.786176 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:16:07.786191 | orchestrator | 2025-07-06 20:16:07.786215 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-07-06 20:16:07.786230 | orchestrator | Sunday 06 July 2025 20:14:11 +0000 (0:00:02.448) 0:00:46.831 *********** 2025-07-06 20:16:07.786241 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:16:07.786249 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:16:07.786257 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:16:07.786265 | orchestrator | 2025-07-06 20:16:07.786273 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-07-06 20:16:07.786281 | orchestrator | Sunday 06 July 2025 20:14:12 +0000 (0:00:00.829) 0:00:47.660 *********** 2025-07-06 20:16:07.786289 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:16:07.786296 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:16:07.786304 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:16:07.786312 | orchestrator | 2025-07-06 20:16:07.786320 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-07-06 20:16:07.786328 | orchestrator | Sunday 06 July 2025 20:14:12 +0000 (0:00:00.310) 0:00:47.971 *********** 2025-07-06 20:16:07.786336 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:16:07.786343 | 
orchestrator | skipping: [testbed-node-1] 2025-07-06 20:16:07.786351 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:16:07.786359 | orchestrator | 2025-07-06 20:16:07.786367 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-07-06 20:16:07.786375 | orchestrator | Sunday 06 July 2025 20:14:13 +0000 (0:00:00.320) 0:00:48.291 *********** 2025-07-06 20:16:07.786382 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:16:07.786390 | orchestrator | 2025-07-06 20:16:07.786398 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-07-06 20:16:07.786412 | orchestrator | Sunday 06 July 2025 20:14:26 +0000 (0:00:13.669) 0:01:01.961 *********** 2025-07-06 20:16:07.786420 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:16:07.786428 | orchestrator | 2025-07-06 20:16:07.786436 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-06 20:16:07.786444 | orchestrator | Sunday 06 July 2025 20:14:36 +0000 (0:00:09.741) 0:01:11.702 *********** 2025-07-06 20:16:07.786452 | orchestrator | 2025-07-06 20:16:07.786464 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-06 20:16:07.786472 | orchestrator | Sunday 06 July 2025 20:14:36 +0000 (0:00:00.246) 0:01:11.948 *********** 2025-07-06 20:16:07.786480 | orchestrator | 2025-07-06 20:16:07.786488 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-06 20:16:07.786496 | orchestrator | Sunday 06 July 2025 20:14:36 +0000 (0:00:00.063) 0:01:12.012 *********** 2025-07-06 20:16:07.786504 | orchestrator | 2025-07-06 20:16:07.786512 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-07-06 20:16:07.786520 | orchestrator | Sunday 06 July 2025 20:14:36 +0000 (0:00:00.072) 0:01:12.085 *********** 2025-07-06 20:16:07.786527 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:16:07.786535 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:16:07.786543 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:16:07.786551 | orchestrator | 2025-07-06 20:16:07.786559 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-07-06 20:16:07.786567 | orchestrator | Sunday 06 July 2025 20:15:02 +0000 (0:00:25.468) 0:01:37.553 *********** 2025-07-06 20:16:07.786575 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:16:07.786582 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:16:07.786590 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:16:07.786598 | orchestrator | 2025-07-06 20:16:07.786606 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-07-06 20:16:07.786614 | orchestrator | Sunday 06 July 2025 20:15:08 +0000 (0:00:06.541) 0:01:44.094 *********** 2025-07-06 20:16:07.786622 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:16:07.786630 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:16:07.786637 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:16:07.786645 | orchestrator | 2025-07-06 20:16:07.786653 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-06 20:16:07.786668 | orchestrator | Sunday 06 July 2025 20:15:15 +0000 (0:00:06.741) 0:01:50.836 *********** 2025-07-06 20:16:07.786676 | orchestrator | included: 
/ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:16:07.786684 | orchestrator | 2025-07-06 20:16:07.786691 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-07-06 20:16:07.786699 | orchestrator | Sunday 06 July 2025 20:15:16 +0000 (0:00:00.710) 0:01:51.547 *********** 2025-07-06 20:16:07.786707 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:16:07.786715 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:16:07.786723 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:16:07.786731 | orchestrator | 2025-07-06 20:16:07.786739 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-07-06 20:16:07.786747 | orchestrator | Sunday 06 July 2025 20:15:17 +0000 (0:00:00.709) 0:01:52.256 *********** 2025-07-06 20:16:07.786755 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:16:07.786763 | orchestrator | 2025-07-06 20:16:07.786770 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-07-06 20:16:07.786778 | orchestrator | Sunday 06 July 2025 20:15:18 +0000 (0:00:01.781) 0:01:54.038 *********** 2025-07-06 20:16:07.786786 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-07-06 20:16:07.786794 | orchestrator | 2025-07-06 20:16:07.786802 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-07-06 20:16:07.786810 | orchestrator | Sunday 06 July 2025 20:15:29 +0000 (0:00:11.166) 0:02:05.204 *********** 2025-07-06 20:16:07.786818 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-07-06 20:16:07.786825 | orchestrator | 2025-07-06 20:16:07.786833 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-07-06 20:16:07.786841 | orchestrator | Sunday 06 July 2025 20:15:52 +0000 (0:00:23.006) 0:02:28.211 *********** 2025-07-06 20:16:07.786849 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-07-06 20:16:07.786857 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-07-06 20:16:07.786865 | orchestrator | 2025-07-06 20:16:07.786873 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-07-06 20:16:07.786881 | orchestrator | Sunday 06 July 2025 20:16:00 +0000 (0:00:07.545) 0:02:35.757 *********** 2025-07-06 20:16:07.786888 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:16:07.786896 | orchestrator | 2025-07-06 20:16:07.786904 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-07-06 20:16:07.786912 | orchestrator | Sunday 06 July 2025 20:16:01 +0000 (0:00:00.793) 0:02:36.550 *********** 2025-07-06 20:16:07.786920 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:16:07.786928 | orchestrator | 2025-07-06 20:16:07.786936 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-07-06 20:16:07.786943 | orchestrator | Sunday 06 July 2025 20:16:01 +0000 (0:00:00.350) 0:02:36.900 *********** 2025-07-06 20:16:07.786951 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:16:07.786959 | orchestrator | 2025-07-06 20:16:07.786967 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-07-06 20:16:07.786975 | orchestrator | Sunday 06 
July 2025 20:16:01 +0000 (0:00:00.275) 0:02:37.176 *********** 2025-07-06 20:16:07.786983 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:16:07.786991 | orchestrator | 2025-07-06 20:16:07.786998 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-07-06 20:16:07.787006 | orchestrator | Sunday 06 July 2025 20:16:02 +0000 (0:00:00.344) 0:02:37.520 *********** 2025-07-06 20:16:07.787014 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:16:07.787022 | orchestrator | 2025-07-06 20:16:07.787030 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-06 20:16:07.787042 | orchestrator | Sunday 06 July 2025 20:16:05 +0000 (0:00:03.396) 0:02:40.917 *********** 2025-07-06 20:16:07.787058 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:16:07.787066 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:16:07.787074 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:16:07.787082 | orchestrator | 2025-07-06 20:16:07.787090 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:16:07.787102 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-07-06 20:16:07.787111 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-07-06 20:16:07.787119 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-07-06 20:16:07.787127 | orchestrator | 2025-07-06 20:16:07.787135 | orchestrator | 2025-07-06 20:16:07.787143 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:16:07.787151 | orchestrator | Sunday 06 July 2025 20:16:06 +0000 (0:00:01.040) 0:02:41.957 *********** 2025-07-06 20:16:07.787159 | orchestrator | =============================================================================== 2025-07-06 20:16:07.787166 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 25.47s 2025-07-06 20:16:07.787174 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.01s 2025-07-06 20:16:07.787182 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.67s 2025-07-06 20:16:07.787190 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.17s 2025-07-06 20:16:07.787198 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.74s 2025-07-06 20:16:07.787227 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.86s 2025-07-06 20:16:07.787235 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.55s 2025-07-06 20:16:07.787243 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.74s 2025-07-06 20:16:07.787251 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 6.54s 2025-07-06 20:16:07.787259 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.59s 2025-07-06 20:16:07.787267 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.60s 2025-07-06 20:16:07.787275 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.54s 2025-07-06 20:16:07.787282 | orchestrator | keystone : 
Creating default user role ----------------------------------- 3.40s 2025-07-06 20:16:07.787290 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.61s 2025-07-06 20:16:07.787298 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.45s 2025-07-06 20:16:07.787306 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.29s 2025-07-06 20:16:07.787314 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.23s 2025-07-06 20:16:07.787322 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.17s 2025-07-06 20:16:07.787329 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.00s 2025-07-06 20:16:07.787337 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.78s 2025-07-06 20:16:07.787345 | orchestrator | 2025-07-06 20:16:07 | INFO  | Task 4a065d7e-c1a9-4023-9bad-cc54d11d0263 is in state SUCCESS 2025-07-06 20:16:07.787353 | orchestrator | 2025-07-06 20:16:07 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:07.787361 | orchestrator | 2025-07-06 20:16:07 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:10.813701 | orchestrator | 2025-07-06 20:16:10 | INFO  | Task e70e0b9f-02fe-45a7-a358-3f09f5c6890c is in state STARTED 2025-07-06 20:16:10.813799 | orchestrator | 2025-07-06 20:16:10 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:10.815165 | orchestrator | 2025-07-06 20:16:10 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:10.816277 | orchestrator | 2025-07-06 20:16:10 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:16:10.817456 | orchestrator | 2025-07-06 20:16:10 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:10.817654 | orchestrator | 2025-07-06 20:16:10 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:13.851759 | orchestrator | 2025-07-06 20:16:13 | INFO  | Task e70e0b9f-02fe-45a7-a358-3f09f5c6890c is in state STARTED 2025-07-06 20:16:13.851873 | orchestrator | 2025-07-06 20:16:13 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:13.852693 | orchestrator | 2025-07-06 20:16:13 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:13.855722 | orchestrator | 2025-07-06 20:16:13 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:16:13.856664 | orchestrator | 2025-07-06 20:16:13 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:13.856686 | orchestrator | 2025-07-06 20:16:13 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:16.884507 | orchestrator | 2025-07-06 20:16:16 | INFO  | Task e70e0b9f-02fe-45a7-a358-3f09f5c6890c is in state STARTED 2025-07-06 20:16:16.885650 | orchestrator | 2025-07-06 20:16:16 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:16.886095 | orchestrator | 2025-07-06 20:16:16 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:16.886755 | orchestrator | 2025-07-06 20:16:16 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:16:16.887711 | orchestrator | 2025-07-06 20:16:16 | INFO  | Task 
301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:16.888113 | orchestrator | 2025-07-06 20:16:16 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:19.921520 | orchestrator | 2025-07-06 20:16:19 | INFO  | Task e70e0b9f-02fe-45a7-a358-3f09f5c6890c is in state STARTED 2025-07-06 20:16:19.921608 | orchestrator | 2025-07-06 20:16:19 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:19.922917 | orchestrator | 2025-07-06 20:16:19 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:19.923739 | orchestrator | 2025-07-06 20:16:19 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:16:19.925420 | orchestrator | 2025-07-06 20:16:19 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:19.925443 | orchestrator | 2025-07-06 20:16:19 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:22.955614 | orchestrator | 2025-07-06 20:16:22 | INFO  | Task e70e0b9f-02fe-45a7-a358-3f09f5c6890c is in state STARTED 2025-07-06 20:16:22.955723 | orchestrator | 2025-07-06 20:16:22 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:22.956294 | orchestrator | 2025-07-06 20:16:22 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:22.957284 | orchestrator | 2025-07-06 20:16:22 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:16:22.957711 | orchestrator | 2025-07-06 20:16:22 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:22.957844 | orchestrator | 2025-07-06 20:16:22 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:25.980138 | orchestrator | 2025-07-06 20:16:25 | INFO  | Task e70e0b9f-02fe-45a7-a358-3f09f5c6890c is in state STARTED 2025-07-06 20:16:25.980857 | orchestrator | 2025-07-06 20:16:25 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:25.980906 | orchestrator | 2025-07-06 20:16:25 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:25.981323 | orchestrator | 2025-07-06 20:16:25 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:16:25.981910 | orchestrator | 2025-07-06 20:16:25 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:25.981945 | orchestrator | 2025-07-06 20:16:25 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:29.008881 | orchestrator | 2025-07-06 20:16:29 | INFO  | Task e70e0b9f-02fe-45a7-a358-3f09f5c6890c is in state SUCCESS 2025-07-06 20:16:29.008989 | orchestrator | 2025-07-06 20:16:29 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:29.009005 | orchestrator | 2025-07-06 20:16:29 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:29.009017 | orchestrator | 2025-07-06 20:16:29 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:16:29.009418 | orchestrator | 2025-07-06 20:16:29 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:16:29.011008 | orchestrator | 2025-07-06 20:16:29 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:29.011078 | orchestrator | 2025-07-06 20:16:29 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:32.047579 | orchestrator | 2025-07-06 20:16:32 | INFO  | Task 
d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:32.050455 | orchestrator | 2025-07-06 20:16:32 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:32.053411 | orchestrator | 2025-07-06 20:16:32 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:16:32.053459 | orchestrator | 2025-07-06 20:16:32 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:16:32.053888 | orchestrator | 2025-07-06 20:16:32 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:32.053935 | orchestrator | 2025-07-06 20:16:32 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:35.097426 | orchestrator | 2025-07-06 20:16:35 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:35.099450 | orchestrator | 2025-07-06 20:16:35 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:35.102376 | orchestrator | 2025-07-06 20:16:35 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:16:35.105099 | orchestrator | 2025-07-06 20:16:35 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:16:35.106616 | orchestrator | 2025-07-06 20:16:35 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:35.107330 | orchestrator | 2025-07-06 20:16:35 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:38.153112 | orchestrator | 2025-07-06 20:16:38 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:38.153239 | orchestrator | 2025-07-06 20:16:38 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:38.153254 | orchestrator | 2025-07-06 20:16:38 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:16:38.153293 | orchestrator | 2025-07-06 20:16:38 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:16:38.153304 | orchestrator | 2025-07-06 20:16:38 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:38.153314 | orchestrator | 2025-07-06 20:16:38 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:41.188699 | orchestrator | 2025-07-06 20:16:41 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:41.189056 | orchestrator | 2025-07-06 20:16:41 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:41.189822 | orchestrator | 2025-07-06 20:16:41 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:16:41.190909 | orchestrator | 2025-07-06 20:16:41 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:16:41.193723 | orchestrator | 2025-07-06 20:16:41 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:41.193774 | orchestrator | 2025-07-06 20:16:41 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:44.215378 | orchestrator | 2025-07-06 20:16:44 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:44.215715 | orchestrator | 2025-07-06 20:16:44 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:44.216495 | orchestrator | 2025-07-06 20:16:44 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:16:44.217385 | orchestrator | 2025-07-06 
20:16:44 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:16:44.218132 | orchestrator | 2025-07-06 20:16:44 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:44.218259 | orchestrator | 2025-07-06 20:16:44 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:47.246699 | orchestrator | 2025-07-06 20:16:47 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:47.248087 | orchestrator | 2025-07-06 20:16:47 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:47.248896 | orchestrator | 2025-07-06 20:16:47 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:16:47.249773 | orchestrator | 2025-07-06 20:16:47 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:16:47.251796 | orchestrator | 2025-07-06 20:16:47 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:47.251821 | orchestrator | 2025-07-06 20:16:47 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:50.279009 | orchestrator | 2025-07-06 20:16:50 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:50.280822 | orchestrator | 2025-07-06 20:16:50 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:50.281503 | orchestrator | 2025-07-06 20:16:50 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:16:50.282396 | orchestrator | 2025-07-06 20:16:50 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:16:50.300659 | orchestrator | 2025-07-06 20:16:50 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:50.300718 | orchestrator | 2025-07-06 20:16:50 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:53.311278 | orchestrator | 2025-07-06 20:16:53 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:53.311752 | orchestrator | 2025-07-06 20:16:53 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:53.313294 | orchestrator | 2025-07-06 20:16:53 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:16:53.314010 | orchestrator | 2025-07-06 20:16:53 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:16:53.314725 | orchestrator | 2025-07-06 20:16:53 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:53.314873 | orchestrator | 2025-07-06 20:16:53 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:56.344444 | orchestrator | 2025-07-06 20:16:56 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:56.346648 | orchestrator | 2025-07-06 20:16:56 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:56.347284 | orchestrator | 2025-07-06 20:16:56 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:16:56.348100 | orchestrator | 2025-07-06 20:16:56 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:16:56.348784 | orchestrator | 2025-07-06 20:16:56 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:56.348811 | orchestrator | 2025-07-06 20:16:56 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:59.376625 | orchestrator | 2025-07-06 
20:16:59 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:16:59.377098 | orchestrator | 2025-07-06 20:16:59 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:16:59.377847 | orchestrator | 2025-07-06 20:16:59 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:16:59.378693 | orchestrator | 2025-07-06 20:16:59 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:16:59.379925 | orchestrator | 2025-07-06 20:16:59 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:16:59.380001 | orchestrator | 2025-07-06 20:16:59 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:02.412252 | orchestrator | 2025-07-06 20:17:02 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:02.412336 | orchestrator | 2025-07-06 20:17:02 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:02.412812 | orchestrator | 2025-07-06 20:17:02 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:02.413207 | orchestrator | 2025-07-06 20:17:02 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:02.413752 | orchestrator | 2025-07-06 20:17:02 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:17:02.413773 | orchestrator | 2025-07-06 20:17:02 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:05.443893 | orchestrator | 2025-07-06 20:17:05 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:05.444001 | orchestrator | 2025-07-06 20:17:05 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:05.444909 | orchestrator | 2025-07-06 20:17:05 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:05.445598 | orchestrator | 2025-07-06 20:17:05 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:05.446150 | orchestrator | 2025-07-06 20:17:05 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:17:05.446273 | orchestrator | 2025-07-06 20:17:05 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:08.468354 | orchestrator | 2025-07-06 20:17:08 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:08.468451 | orchestrator | 2025-07-06 20:17:08 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:08.469491 | orchestrator | 2025-07-06 20:17:08 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:08.471008 | orchestrator | 2025-07-06 20:17:08 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:08.472099 | orchestrator | 2025-07-06 20:17:08 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:17:08.474250 | orchestrator | 2025-07-06 20:17:08 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:11.496877 | orchestrator | 2025-07-06 20:17:11 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:11.496990 | orchestrator | 2025-07-06 20:17:11 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:11.498834 | orchestrator | 2025-07-06 20:17:11 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:11.499344 | 
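The repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines come from the deployment wrapper polling the state of the background tasks it queued on the manager. A minimal sketch of such a wait loop, assuming a Celery-style result backend (the STARTED/SUCCESS states and the task UUIDs match Celery's conventions, but the actual osism implementation is not shown in this log):

    import time
    from celery import Celery
    from celery.result import AsyncResult

    # Broker/backend URLs are placeholders, not taken from this log.
    app = Celery(broker="redis://localhost:6379/0", backend="redis://localhost:6379/1")

    def wait_for_tasks(task_ids, interval=1):
        """Poll the given task IDs until none of them is still running."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = AsyncResult(task_id, app=app).state  # PENDING, STARTED, SUCCESS, ...
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

The state queries themselves take time, which would explain why consecutive checks in the log land roughly three seconds apart despite the one-second sleep.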
orchestrator | 2025-07-06 20:17:11 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:11.500850 | orchestrator | 2025-07-06 20:17:11 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state STARTED 2025-07-06 20:17:11.500887 | orchestrator | 2025-07-06 20:17:11 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:14.530156 | orchestrator | 2025-07-06 20:17:14 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:14.530304 | orchestrator | 2025-07-06 20:17:14 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:14.530669 | orchestrator | 2025-07-06 20:17:14 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:14.531090 | orchestrator | 2025-07-06 20:17:14 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:14.531660 | orchestrator | 2025-07-06 20:17:14 | INFO  | Task 301783a8-43f5-4cba-9178-a86704b5dae1 is in state SUCCESS 2025-07-06 20:17:14.532019 | orchestrator | 2025-07-06 20:17:14.532045 | orchestrator | 2025-07-06 20:17:14.532057 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:17:14.532069 | orchestrator | 2025-07-06 20:17:14.532081 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:17:14.532092 | orchestrator | Sunday 06 July 2025 20:15:53 +0000 (0:00:00.309) 0:00:00.309 *********** 2025-07-06 20:17:14.532103 | orchestrator | ok: [testbed-manager] 2025-07-06 20:17:14.532115 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.532127 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.532138 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.532149 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.532160 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.532238 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.532252 | orchestrator | 2025-07-06 20:17:14.532264 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:17:14.532358 | orchestrator | Sunday 06 July 2025 20:15:54 +0000 (0:00:00.896) 0:00:01.205 *********** 2025-07-06 20:17:14.532374 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-07-06 20:17:14.532385 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-07-06 20:17:14.532397 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-07-06 20:17:14.532408 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-07-06 20:17:14.532446 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-07-06 20:17:14.532457 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-07-06 20:17:14.532468 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-07-06 20:17:14.532479 | orchestrator | 2025-07-06 20:17:14.532490 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-07-06 20:17:14.532501 | orchestrator | 2025-07-06 20:17:14.532512 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-07-06 20:17:14.532523 | orchestrator | Sunday 06 July 2025 20:15:55 +0000 (0:00:01.343) 0:00:02.548 *********** 2025-07-06 20:17:14.532536 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:14.532548 | orchestrator | 2025-07-06 20:17:14.532559 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-07-06 20:17:14.532570 | orchestrator | Sunday 06 July 2025 20:15:56 +0000 (0:00:01.387) 0:00:03.936 *********** 2025-07-06 20:17:14.532581 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-07-06 20:17:14.532592 | orchestrator | 2025-07-06 20:17:14.532603 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-07-06 20:17:14.532615 | orchestrator | Sunday 06 July 2025 20:16:00 +0000 (0:00:03.414) 0:00:07.350 *********** 2025-07-06 20:17:14.532627 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-07-06 20:17:14.532641 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-07-06 20:17:14.532653 | orchestrator | 2025-07-06 20:17:14.532665 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-07-06 20:17:14.532677 | orchestrator | Sunday 06 July 2025 20:16:06 +0000 (0:00:06.205) 0:00:13.555 *********** 2025-07-06 20:17:14.532690 | orchestrator | ok: [testbed-manager] => (item=service) 2025-07-06 20:17:14.532703 | orchestrator | 2025-07-06 20:17:14.532716 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-07-06 20:17:14.532726 | orchestrator | Sunday 06 July 2025 20:16:09 +0000 (0:00:03.369) 0:00:16.924 *********** 2025-07-06 20:17:14.532751 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-06 20:17:14.532762 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-07-06 20:17:14.532773 | orchestrator | 2025-07-06 20:17:14.532784 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-07-06 20:17:14.532795 | orchestrator | Sunday 06 July 2025 20:16:13 +0000 (0:00:03.893) 0:00:20.818 *********** 2025-07-06 20:17:14.532806 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-07-06 20:17:14.532817 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-07-06 20:17:14.532828 | orchestrator | 2025-07-06 20:17:14.532838 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-07-06 20:17:14.532849 | orchestrator | Sunday 06 July 2025 20:16:19 +0000 (0:00:05.715) 0:00:26.533 *********** 2025-07-06 20:17:14.532860 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-07-06 20:17:14.532871 | orchestrator | 2025-07-06 20:17:14.532882 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:17:14.532930 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:17:14.532941 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:17:14.532953 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:17:14.532963 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:17:14.532983 | orchestrator | testbed-node-3 : ok=3  
changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:17:14.533008 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:17:14.533020 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:17:14.533031 | orchestrator | 2025-07-06 20:17:14.533042 | orchestrator | 2025-07-06 20:17:14.533053 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:17:14.533064 | orchestrator | Sunday 06 July 2025 20:16:25 +0000 (0:00:05.705) 0:00:32.239 *********** 2025-07-06 20:17:14.533075 | orchestrator | =============================================================================== 2025-07-06 20:17:14.533086 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.21s 2025-07-06 20:17:14.533096 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.72s 2025-07-06 20:17:14.533107 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.71s 2025-07-06 20:17:14.533118 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.89s 2025-07-06 20:17:14.533129 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.41s 2025-07-06 20:17:14.533140 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.37s 2025-07-06 20:17:14.533208 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.39s 2025-07-06 20:17:14.533223 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.34s 2025-07-06 20:17:14.533234 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.90s 2025-07-06 20:17:14.533245 | orchestrator | 2025-07-06 20:17:14.533256 | orchestrator | 2025-07-06 20:17:14.533267 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-07-06 20:17:14.533278 | orchestrator | 2025-07-06 20:17:14.533288 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-07-06 20:17:14.533299 | orchestrator | Sunday 06 July 2025 20:15:45 +0000 (0:00:00.296) 0:00:00.296 *********** 2025-07-06 20:17:14.533310 | orchestrator | changed: [testbed-manager] 2025-07-06 20:17:14.533321 | orchestrator | 2025-07-06 20:17:14.533332 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-07-06 20:17:14.533342 | orchestrator | Sunday 06 July 2025 20:15:47 +0000 (0:00:02.205) 0:00:02.502 *********** 2025-07-06 20:17:14.533353 | orchestrator | changed: [testbed-manager] 2025-07-06 20:17:14.533364 | orchestrator | 2025-07-06 20:17:14.533375 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-07-06 20:17:14.533386 | orchestrator | Sunday 06 July 2025 20:15:48 +0000 (0:00:01.014) 0:00:03.517 *********** 2025-07-06 20:17:14.533396 | orchestrator | changed: [testbed-manager] 2025-07-06 20:17:14.533407 | orchestrator | 2025-07-06 20:17:14.533418 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-07-06 20:17:14.533429 | orchestrator | Sunday 06 July 2025 20:15:49 +0000 (0:00:01.112) 0:00:04.629 *********** 2025-07-06 20:17:14.533440 | orchestrator | changed: [testbed-manager] 2025-07-06 
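The "Apply role ceph-rgw" play above registers the Ceph RADOS Gateway as the Swift object-store service in Keystone: a swift service, internal and public endpoints on port 6780, a ceph_rgw user in the service project, and the admin and ResellerAdmin roles. kolla-ansible's service-ks-register role does this with Ansible modules and is idempotent; the following openstacksdk sketch only illustrates the same sequence of API calls (the service and endpoint values are taken from the log, while the clouds.yaml entry, region name and password are assumptions):

    import openstack

    conn = openstack.connect(cloud="admin")  # clouds.yaml entry name is an assumption

    svc = conn.identity.create_service(name="swift", type="object-store", enabled=True)
    for interface, url in {
        "internal": "https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
        "public": "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
    }.items():
        conn.identity.create_endpoint(service_id=svc.id, interface=interface,
                                      url=url, region_id="RegionOne")  # region is an assumption

    project = conn.identity.create_project(name="service")
    user = conn.identity.create_user(name="ceph_rgw", password="CHANGE_ME",
                                     default_project_id=project.id)

    admin = conn.identity.find_role("admin")
    conn.identity.create_role(name="ResellerAdmin")  # created but, as in the log, not granted here
    conn.identity.assign_project_role_to_user(project, user, admin)

Unlike the role in the play, this sketch creates everything unconditionally; rerunning it would fail on resources that already exist.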
20:17:14.533450 | orchestrator | 2025-07-06 20:17:14.533461 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-07-06 20:17:14.533472 | orchestrator | Sunday 06 July 2025 20:15:51 +0000 (0:00:01.286) 0:00:05.916 *********** 2025-07-06 20:17:14.533483 | orchestrator | changed: [testbed-manager] 2025-07-06 20:17:14.533494 | orchestrator | 2025-07-06 20:17:14.533504 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-07-06 20:17:14.533515 | orchestrator | Sunday 06 July 2025 20:15:52 +0000 (0:00:01.061) 0:00:06.978 *********** 2025-07-06 20:17:14.533526 | orchestrator | changed: [testbed-manager] 2025-07-06 20:17:14.533537 | orchestrator | 2025-07-06 20:17:14.533547 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-07-06 20:17:14.533574 | orchestrator | Sunday 06 July 2025 20:15:53 +0000 (0:00:01.103) 0:00:08.081 *********** 2025-07-06 20:17:14.533585 | orchestrator | changed: [testbed-manager] 2025-07-06 20:17:14.533596 | orchestrator | 2025-07-06 20:17:14.533607 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-07-06 20:17:14.533618 | orchestrator | Sunday 06 July 2025 20:15:55 +0000 (0:00:02.024) 0:00:10.106 *********** 2025-07-06 20:17:14.533628 | orchestrator | changed: [testbed-manager] 2025-07-06 20:17:14.533639 | orchestrator | 2025-07-06 20:17:14.533650 | orchestrator | TASK [Create admin user] ******************************************************* 2025-07-06 20:17:14.533661 | orchestrator | Sunday 06 July 2025 20:15:56 +0000 (0:00:00.967) 0:00:11.074 *********** 2025-07-06 20:17:14.533671 | orchestrator | changed: [testbed-manager] 2025-07-06 20:17:14.533682 | orchestrator | 2025-07-06 20:17:14.533693 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-07-06 20:17:14.533704 | orchestrator | Sunday 06 July 2025 20:16:48 +0000 (0:00:51.924) 0:01:02.998 *********** 2025-07-06 20:17:14.533715 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:17:14.533725 | orchestrator | 2025-07-06 20:17:14.533736 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-07-06 20:17:14.533747 | orchestrator | 2025-07-06 20:17:14.533758 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-07-06 20:17:14.533769 | orchestrator | Sunday 06 July 2025 20:16:48 +0000 (0:00:00.166) 0:01:03.165 *********** 2025-07-06 20:17:14.533779 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.533790 | orchestrator | 2025-07-06 20:17:14.533801 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-07-06 20:17:14.533812 | orchestrator | 2025-07-06 20:17:14.533823 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-07-06 20:17:14.533833 | orchestrator | Sunday 06 July 2025 20:16:49 +0000 (0:00:01.514) 0:01:04.679 *********** 2025-07-06 20:17:14.533844 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:14.533855 | orchestrator | 2025-07-06 20:17:14.533866 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-07-06 20:17:14.533877 | orchestrator | 2025-07-06 20:17:14.533888 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-07-06 20:17:14.533899 | 
orchestrator | Sunday 06 July 2025 20:17:01 +0000 (0:00:11.349) 0:01:16.028 *********** 2025-07-06 20:17:14.533909 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:14.533920 | orchestrator | 2025-07-06 20:17:14.533939 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:17:14.533950 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 20:17:14.533961 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:17:14.533972 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:17:14.533983 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:17:14.533994 | orchestrator | 2025-07-06 20:17:14.534005 | orchestrator | 2025-07-06 20:17:14.534064 | orchestrator | 2025-07-06 20:17:14.534079 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:17:14.534090 | orchestrator | Sunday 06 July 2025 20:17:12 +0000 (0:00:11.285) 0:01:27.314 *********** 2025-07-06 20:17:14.534101 | orchestrator | =============================================================================== 2025-07-06 20:17:14.534112 | orchestrator | Create admin user ------------------------------------------------------ 51.92s 2025-07-06 20:17:14.534123 | orchestrator | Restart ceph manager service ------------------------------------------- 24.15s 2025-07-06 20:17:14.534134 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.21s 2025-07-06 20:17:14.534152 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.02s 2025-07-06 20:17:14.534163 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.29s 2025-07-06 20:17:14.534195 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.11s 2025-07-06 20:17:14.534207 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.10s 2025-07-06 20:17:14.534217 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.06s 2025-07-06 20:17:14.534228 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.01s 2025-07-06 20:17:14.534239 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.97s 2025-07-06 20:17:14.534250 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.17s 2025-07-06 20:17:14.534261 | orchestrator | 2025-07-06 20:17:14 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:17.556249 | orchestrator | 2025-07-06 20:17:17 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:17.557746 | orchestrator | 2025-07-06 20:17:17 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:17.557860 | orchestrator | 2025-07-06 20:17:17 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:17.558590 | orchestrator | 2025-07-06 20:17:17 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:17.558625 | orchestrator | 2025-07-06 20:17:17 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:20.588592 | orchestrator | 2025-07-06 20:17:20 | INFO  
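The ceph dashboard play above disables the mgr dashboard module, adjusts its settings, re-enables it and creates the admin account from a temporary password file. The task names map almost one-to-one onto ceph CLI calls; a sketch of the same sequence run on the manager (the password file path and the "administrator" dashboard role are assumptions, the rest mirrors the task names):

    import subprocess

    def ceph(*args):
        """Run a ceph CLI command and fail loudly on a non-zero exit code."""
        subprocess.run(["ceph", *args], check=True)

    ceph("mgr", "module", "disable", "dashboard")
    ceph("config", "set", "mgr", "mgr/dashboard/ssl", "false")
    ceph("config", "set", "mgr", "mgr/dashboard/server_port", "7000")
    ceph("config", "set", "mgr", "mgr/dashboard/server_addr", "0.0.0.0")
    ceph("config", "set", "mgr", "mgr/dashboard/standby_behaviour", "error")
    ceph("config", "set", "mgr", "mgr/dashboard/standby_error_status_code", "404")
    ceph("mgr", "module", "enable", "dashboard")

    # Path and role are assumptions; the log only says the password is written to a temporary file.
    ceph("dashboard", "ac-user-create", "admin", "-i", "/tmp/ceph_dashboard_password", "administrator")

The "Restart ceph manager service" plays that follow bounce the ceph-mgr daemons on testbed-node-0/1/2, presumably so the re-enabled module picks up the new settings.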
| Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:20.588702 | orchestrator | 2025-07-06 20:17:20 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:20.589361 | orchestrator | 2025-07-06 20:17:20 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:20.589757 | orchestrator | 2025-07-06 20:17:20 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:20.590136 | orchestrator | 2025-07-06 20:17:20 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:23.623538 | orchestrator | 2025-07-06 20:17:23 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:23.625272 | orchestrator | 2025-07-06 20:17:23 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:23.626295 | orchestrator | 2025-07-06 20:17:23 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:23.626321 | orchestrator | 2025-07-06 20:17:23 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:23.626332 | orchestrator | 2025-07-06 20:17:23 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:26.657040 | orchestrator | 2025-07-06 20:17:26 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:26.659887 | orchestrator | 2025-07-06 20:17:26 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:26.660466 | orchestrator | 2025-07-06 20:17:26 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:26.660977 | orchestrator | 2025-07-06 20:17:26 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:26.661025 | orchestrator | 2025-07-06 20:17:26 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:29.685670 | orchestrator | 2025-07-06 20:17:29 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:29.685814 | orchestrator | 2025-07-06 20:17:29 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:29.686158 | orchestrator | 2025-07-06 20:17:29 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:29.687041 | orchestrator | 2025-07-06 20:17:29 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:29.687065 | orchestrator | 2025-07-06 20:17:29 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:32.723030 | orchestrator | 2025-07-06 20:17:32 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:32.723542 | orchestrator | 2025-07-06 20:17:32 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:32.725290 | orchestrator | 2025-07-06 20:17:32 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:32.726103 | orchestrator | 2025-07-06 20:17:32 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:32.726453 | orchestrator | 2025-07-06 20:17:32 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:35.764613 | orchestrator | 2025-07-06 20:17:35 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:35.766495 | orchestrator | 2025-07-06 20:17:35 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:35.770530 | orchestrator | 2025-07-06 20:17:35 | INFO  | 
Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:35.770585 | orchestrator | 2025-07-06 20:17:35 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:35.770599 | orchestrator | 2025-07-06 20:17:35 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:38.813945 | orchestrator | 2025-07-06 20:17:38 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:38.814097 | orchestrator | 2025-07-06 20:17:38 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:38.815444 | orchestrator | 2025-07-06 20:17:38 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:38.818229 | orchestrator | 2025-07-06 20:17:38 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:38.818290 | orchestrator | 2025-07-06 20:17:38 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:41.858376 | orchestrator | 2025-07-06 20:17:41 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:41.858518 | orchestrator | 2025-07-06 20:17:41 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:41.859000 | orchestrator | 2025-07-06 20:17:41 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:41.859744 | orchestrator | 2025-07-06 20:17:41 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:41.860456 | orchestrator | 2025-07-06 20:17:41 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:44.892887 | orchestrator | 2025-07-06 20:17:44 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:44.893360 | orchestrator | 2025-07-06 20:17:44 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:44.893962 | orchestrator | 2025-07-06 20:17:44 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:44.895001 | orchestrator | 2025-07-06 20:17:44 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:44.895095 | orchestrator | 2025-07-06 20:17:44 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:47.933982 | orchestrator | 2025-07-06 20:17:47 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:47.934942 | orchestrator | 2025-07-06 20:17:47 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:47.936325 | orchestrator | 2025-07-06 20:17:47 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:47.937650 | orchestrator | 2025-07-06 20:17:47 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:47.937669 | orchestrator | 2025-07-06 20:17:47 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:50.990857 | orchestrator | 2025-07-06 20:17:50 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:50.991786 | orchestrator | 2025-07-06 20:17:50 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:50.992900 | orchestrator | 2025-07-06 20:17:50 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:50.993872 | orchestrator | 2025-07-06 20:17:50 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:50.993897 | orchestrator | 2025-07-06 20:17:50 | INFO  | 
Wait 1 second(s) until the next check 2025-07-06 20:17:54.044745 | orchestrator | 2025-07-06 20:17:54 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:54.047379 | orchestrator | 2025-07-06 20:17:54 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:54.048873 | orchestrator | 2025-07-06 20:17:54 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:54.051206 | orchestrator | 2025-07-06 20:17:54 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:54.051250 | orchestrator | 2025-07-06 20:17:54 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:57.107251 | orchestrator | 2025-07-06 20:17:57 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:17:57.109762 | orchestrator | 2025-07-06 20:17:57 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:17:57.112334 | orchestrator | 2025-07-06 20:17:57 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:17:57.114652 | orchestrator | 2025-07-06 20:17:57 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:17:57.116717 | orchestrator | 2025-07-06 20:17:57 | INFO  | Task 01544b4b-7cac-4c7c-a9a8-967f82d18af6 is in state STARTED 2025-07-06 20:17:57.117103 | orchestrator | 2025-07-06 20:17:57 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:00.167751 | orchestrator | 2025-07-06 20:18:00 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:00.170100 | orchestrator | 2025-07-06 20:18:00 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:18:00.172531 | orchestrator | 2025-07-06 20:18:00 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:00.174464 | orchestrator | 2025-07-06 20:18:00 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:00.176481 | orchestrator | 2025-07-06 20:18:00 | INFO  | Task 01544b4b-7cac-4c7c-a9a8-967f82d18af6 is in state STARTED 2025-07-06 20:18:00.176567 | orchestrator | 2025-07-06 20:18:00 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:03.228300 | orchestrator | 2025-07-06 20:18:03 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:03.228475 | orchestrator | 2025-07-06 20:18:03 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:18:03.230145 | orchestrator | 2025-07-06 20:18:03 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:03.230955 | orchestrator | 2025-07-06 20:18:03 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:03.233953 | orchestrator | 2025-07-06 20:18:03 | INFO  | Task 01544b4b-7cac-4c7c-a9a8-967f82d18af6 is in state STARTED 2025-07-06 20:18:03.233989 | orchestrator | 2025-07-06 20:18:03 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:06.275511 | orchestrator | 2025-07-06 20:18:06 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:06.275818 | orchestrator | 2025-07-06 20:18:06 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:18:06.276834 | orchestrator | 2025-07-06 20:18:06 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:06.277729 | orchestrator | 2025-07-06 20:18:06 | INFO  | 
Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:06.278617 | orchestrator | 2025-07-06 20:18:06 | INFO  | Task 01544b4b-7cac-4c7c-a9a8-967f82d18af6 is in state STARTED 2025-07-06 20:18:06.278644 | orchestrator | 2025-07-06 20:18:06 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:09.326571 | orchestrator | 2025-07-06 20:18:09 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:09.328281 | orchestrator | 2025-07-06 20:18:09 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:18:09.329494 | orchestrator | 2025-07-06 20:18:09 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:09.330866 | orchestrator | 2025-07-06 20:18:09 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:09.332309 | orchestrator | 2025-07-06 20:18:09 | INFO  | Task 01544b4b-7cac-4c7c-a9a8-967f82d18af6 is in state STARTED 2025-07-06 20:18:09.332517 | orchestrator | 2025-07-06 20:18:09 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:12.370103 | orchestrator | 2025-07-06 20:18:12 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:12.373067 | orchestrator | 2025-07-06 20:18:12 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:18:12.373768 | orchestrator | 2025-07-06 20:18:12 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:12.374365 | orchestrator | 2025-07-06 20:18:12 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:12.375312 | orchestrator | 2025-07-06 20:18:12 | INFO  | Task 01544b4b-7cac-4c7c-a9a8-967f82d18af6 is in state STARTED 2025-07-06 20:18:12.378530 | orchestrator | 2025-07-06 20:18:12 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:15.410458 | orchestrator | 2025-07-06 20:18:15 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:15.410797 | orchestrator | 2025-07-06 20:18:15 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:18:15.411896 | orchestrator | 2025-07-06 20:18:15 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:15.415973 | orchestrator | 2025-07-06 20:18:15 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:15.416264 | orchestrator | 2025-07-06 20:18:15 | INFO  | Task 01544b4b-7cac-4c7c-a9a8-967f82d18af6 is in state SUCCESS 2025-07-06 20:18:15.416725 | orchestrator | 2025-07-06 20:18:15 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:18.456207 | orchestrator | 2025-07-06 20:18:18 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:18.456311 | orchestrator | 2025-07-06 20:18:18 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:18:18.456326 | orchestrator | 2025-07-06 20:18:18 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:18.456338 | orchestrator | 2025-07-06 20:18:18 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:18.456350 | orchestrator | 2025-07-06 20:18:18 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:21.480482 | orchestrator | 2025-07-06 20:18:21 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:21.480802 | orchestrator | 2025-07-06 20:18:21 | INFO  | 
Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:18:21.482702 | orchestrator | 2025-07-06 20:18:21 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:21.483352 | orchestrator | 2025-07-06 20:18:21 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:21.483467 | orchestrator | 2025-07-06 20:18:21 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:24.530366 | orchestrator | 2025-07-06 20:18:24 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:24.532705 | orchestrator | 2025-07-06 20:18:24 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:18:24.535477 | orchestrator | 2025-07-06 20:18:24 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:24.537769 | orchestrator | 2025-07-06 20:18:24 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:24.537803 | orchestrator | 2025-07-06 20:18:24 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:27.572523 | orchestrator | 2025-07-06 20:18:27 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:27.575070 | orchestrator | 2025-07-06 20:18:27 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:18:27.579274 | orchestrator | 2025-07-06 20:18:27 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:27.580587 | orchestrator | 2025-07-06 20:18:27 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:27.580612 | orchestrator | 2025-07-06 20:18:27 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:30.625239 | orchestrator | 2025-07-06 20:18:30 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:30.625982 | orchestrator | 2025-07-06 20:18:30 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:18:30.628262 | orchestrator | 2025-07-06 20:18:30 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:30.629252 | orchestrator | 2025-07-06 20:18:30 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:30.629280 | orchestrator | 2025-07-06 20:18:30 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:33.660340 | orchestrator | 2025-07-06 20:18:33 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:33.660509 | orchestrator | 2025-07-06 20:18:33 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:18:33.661432 | orchestrator | 2025-07-06 20:18:33 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:33.662219 | orchestrator | 2025-07-06 20:18:33 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:33.662256 | orchestrator | 2025-07-06 20:18:33 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:36.705190 | orchestrator | 2025-07-06 20:18:36 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:36.706860 | orchestrator | 2025-07-06 20:18:36 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:18:36.708077 | orchestrator | 2025-07-06 20:18:36 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:36.709443 | orchestrator | 2025-07-06 20:18:36 | INFO  | 
Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:36.709469 | orchestrator | 2025-07-06 20:18:36 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:39.754392 | orchestrator | 2025-07-06 20:18:39 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:39.754669 | orchestrator | 2025-07-06 20:18:39 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:18:39.755722 | orchestrator | 2025-07-06 20:18:39 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:39.757023 | orchestrator | 2025-07-06 20:18:39 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:39.757054 | orchestrator | 2025-07-06 20:18:39 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:42.803072 | orchestrator | 2025-07-06 20:18:42 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:42.803243 | orchestrator | 2025-07-06 20:18:42 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:18:42.804304 | orchestrator | 2025-07-06 20:18:42 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:42.806844 | orchestrator | 2025-07-06 20:18:42 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:42.806903 | orchestrator | 2025-07-06 20:18:42 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:45.874535 | orchestrator | 2025-07-06 20:18:45 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:45.874863 | orchestrator | 2025-07-06 20:18:45 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state STARTED 2025-07-06 20:18:45.876650 | orchestrator | 2025-07-06 20:18:45 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:45.877520 | orchestrator | 2025-07-06 20:18:45 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:45.877568 | orchestrator | 2025-07-06 20:18:45 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:48.915559 | orchestrator | 2025-07-06 20:18:48 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:48.918304 | orchestrator | 2025-07-06 20:18:48 | INFO  | Task bf5ab1a1-1363-4aa8-9cf4-0f681e0163be is in state SUCCESS 2025-07-06 20:18:48.920373 | orchestrator | 2025-07-06 20:18:48.920410 | orchestrator | None 2025-07-06 20:18:48.920419 | orchestrator | 2025-07-06 20:18:48.920481 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:18:48.920491 | orchestrator | 2025-07-06 20:18:48.920498 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:18:48.920506 | orchestrator | Sunday 06 July 2025 20:15:53 +0000 (0:00:00.329) 0:00:00.329 *********** 2025-07-06 20:18:48.920513 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:18:48.920543 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:18:48.920550 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:18:48.920557 | orchestrator | 2025-07-06 20:18:48.920564 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:18:48.920571 | orchestrator | Sunday 06 July 2025 20:15:53 +0000 (0:00:00.400) 0:00:00.730 *********** 2025-07-06 20:18:48.920578 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-07-06 
20:18:48.920586 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-07-06 20:18:48.920592 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-07-06 20:18:48.920600 | orchestrator | 2025-07-06 20:18:48.920607 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-07-06 20:18:48.920614 | orchestrator | 2025-07-06 20:18:48.920621 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-06 20:18:48.920628 | orchestrator | Sunday 06 July 2025 20:15:53 +0000 (0:00:00.532) 0:00:01.263 *********** 2025-07-06 20:18:48.920634 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:18:48.920643 | orchestrator | 2025-07-06 20:18:48.920649 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-07-06 20:18:48.920656 | orchestrator | Sunday 06 July 2025 20:15:54 +0000 (0:00:00.565) 0:00:01.829 *********** 2025-07-06 20:18:48.920663 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-07-06 20:18:48.920670 | orchestrator | 2025-07-06 20:18:48.920677 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-07-06 20:18:48.920684 | orchestrator | Sunday 06 July 2025 20:15:58 +0000 (0:00:03.888) 0:00:05.718 *********** 2025-07-06 20:18:48.920690 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-07-06 20:18:48.920698 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-07-06 20:18:48.920704 | orchestrator | 2025-07-06 20:18:48.920711 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-07-06 20:18:48.920718 | orchestrator | Sunday 06 July 2025 20:16:05 +0000 (0:00:07.004) 0:00:12.722 *********** 2025-07-06 20:18:48.920725 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-07-06 20:18:48.920732 | orchestrator | 2025-07-06 20:18:48.920738 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-07-06 20:18:48.920745 | orchestrator | Sunday 06 July 2025 20:16:08 +0000 (0:00:03.349) 0:00:16.071 *********** 2025-07-06 20:18:48.920753 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-06 20:18:48.920760 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-07-06 20:18:48.920766 | orchestrator | 2025-07-06 20:18:48.920773 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-07-06 20:18:48.920780 | orchestrator | Sunday 06 July 2025 20:16:12 +0000 (0:00:04.221) 0:00:20.293 *********** 2025-07-06 20:18:48.920787 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-06 20:18:48.920794 | orchestrator | 2025-07-06 20:18:48.920800 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-07-06 20:18:48.920807 | orchestrator | Sunday 06 July 2025 20:16:16 +0000 (0:00:03.189) 0:00:23.482 *********** 2025-07-06 20:18:48.920814 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-07-06 20:18:48.920821 | orchestrator | 2025-07-06 20:18:48.920828 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-07-06 20:18:48.920835 | orchestrator | 
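The glance play follows the same pattern: Keystone registration on port 9292 above, then the external_ceph.yml tasks further below copy a ceph.conf and a client keyring into the glance-api config directory so images are stored on the testbed's Ceph cluster. The matching backend section of glance-api.conf looks roughly like the following sketch (the option names are the standard glance_store RBD settings; the pool and user names are assumptions, only the store name "rbd" appears in the log):

    import configparser

    conf = configparser.ConfigParser()
    conf["DEFAULT"] = {"enabled_backends": "rbd:rbd"}   # store name "rbd" as in the log
    conf["glance_store"] = {"default_backend": "rbd"}
    conf["rbd"] = {
        "rbd_store_pool": "images",                     # assumption
        "rbd_store_user": "glance",                     # assumption
        "rbd_store_ceph_conf": "/etc/ceph/ceph.conf",   # assumption
        "rbd_store_chunk_size": "8",
    }

    with open("glance-api.conf.sample", "w") as fh:
        conf.write(fh)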
Sunday 06 July 2025 20:16:20 +0000 (0:00:04.136) 0:00:27.619 *********** 2025-07-06 20:18:48.920891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:18:48.920911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:18:48.920923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:18:48.920938 | orchestrator | 2025-07-06 20:18:48.920946 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-06 20:18:48.920952 | orchestrator | Sunday 06 July 2025 20:16:26 +0000 (0:00:06.679) 0:00:34.299 *********** 2025-07-06 20:18:48.920965 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:18:48.920973 | orchestrator | 2025-07-06 20:18:48.920981 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-07-06 20:18:48.920989 | orchestrator | Sunday 06 July 2025 20:16:27 +0000 (0:00:00.560) 0:00:34.859 *********** 2025-07-06 20:18:48.920997 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:18:48.921004 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:48.921012 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:18:48.921019 | orchestrator | 2025-07-06 20:18:48.921027 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-07-06 20:18:48.921034 | orchestrator | Sunday 06 July 2025 20:16:31 +0000 (0:00:03.589) 0:00:38.449 *********** 2025-07-06 20:18:48.921042 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-06 20:18:48.921050 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-06 20:18:48.921058 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-06 20:18:48.921066 | orchestrator | 2025-07-06 20:18:48.921073 | orchestrator | TASK [glance 
: Copy over ceph Glance keyrings] ********************************* 2025-07-06 20:18:48.921081 | orchestrator | Sunday 06 July 2025 20:16:32 +0000 (0:00:01.467) 0:00:39.917 *********** 2025-07-06 20:18:48.921088 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-06 20:18:48.921096 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-06 20:18:48.921104 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-06 20:18:48.921111 | orchestrator | 2025-07-06 20:18:48.921119 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-07-06 20:18:48.921126 | orchestrator | Sunday 06 July 2025 20:16:33 +0000 (0:00:01.228) 0:00:41.146 *********** 2025-07-06 20:18:48.921133 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:18:48.921172 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:18:48.921180 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:18:48.921188 | orchestrator | 2025-07-06 20:18:48.921195 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-07-06 20:18:48.921203 | orchestrator | Sunday 06 July 2025 20:16:34 +0000 (0:00:00.901) 0:00:42.048 *********** 2025-07-06 20:18:48.921211 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:48.921219 | orchestrator | 2025-07-06 20:18:48.921226 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-07-06 20:18:48.921234 | orchestrator | Sunday 06 July 2025 20:16:34 +0000 (0:00:00.139) 0:00:42.187 *********** 2025-07-06 20:18:48.921241 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:48.921249 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:48.921261 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:48.921269 | orchestrator | 2025-07-06 20:18:48.921277 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-06 20:18:48.921285 | orchestrator | Sunday 06 July 2025 20:16:35 +0000 (0:00:00.283) 0:00:42.471 *********** 2025-07-06 20:18:48.921292 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:18:48.921300 | orchestrator | 2025-07-06 20:18:48.921308 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-07-06 20:18:48.921316 | orchestrator | Sunday 06 July 2025 20:16:35 +0000 (0:00:00.521) 0:00:42.993 *********** 2025-07-06 20:18:48.921333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:18:48.921343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:18:48.921360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:18:48.921368 | orchestrator | 2025-07-06 20:18:48.921375 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-07-06 20:18:48.921381 | orchestrator | Sunday 06 July 2025 20:16:40 +0000 (0:00:04.919) 0:00:47.912 *********** 2025-07-06 20:18:48.921394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 20:18:48.921402 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:48.921413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 20:18:48.921424 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:48.921437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 20:18:48.921445 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:48.921452 | orchestrator | 2025-07-06 20:18:48.921459 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-07-06 20:18:48.921466 | orchestrator | Sunday 06 July 2025 20:16:43 +0000 (0:00:02.933) 0:00:50.846 *********** 2025-07-06 20:18:48.921473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 20:18:48.921484 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:48.921500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 20:18:48.921507 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:48.921515 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 20:18:48.921526 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:48.921533 | orchestrator | 2025-07-06 20:18:48.921540 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-07-06 20:18:48.921546 | orchestrator | Sunday 06 July 2025 20:16:46 +0000 (0:00:03.077) 0:00:53.923 *********** 2025-07-06 20:18:48.921553 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:48.921560 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:48.921567 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:48.921573 | orchestrator | 2025-07-06 20:18:48.921580 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-07-06 20:18:48.921587 | orchestrator | Sunday 06 July 2025 20:16:50 +0000 (0:00:04.018) 0:00:57.941 *********** 2025-07-06 20:18:48.921607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:18:48.921615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:18:48.921630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 
6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:18:48.921638 | orchestrator | 2025-07-06 20:18:48.921644 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-07-06 20:18:48.921651 | orchestrator | Sunday 06 July 2025 20:16:55 +0000 (0:00:04.460) 0:01:02.402 *********** 2025-07-06 20:18:48.921658 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:48.921665 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:18:48.921672 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:18:48.921678 | orchestrator | 2025-07-06 20:18:48.921685 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-07-06 20:18:48.921845 | orchestrator | Sunday 06 July 2025 20:17:01 +0000 (0:00:06.667) 0:01:09.069 *********** 2025-07-06 20:18:48.921858 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:48.921865 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:48.921871 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:48.921878 | orchestrator | 2025-07-06 20:18:48.921885 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-07-06 20:18:48.921892 | orchestrator | Sunday 06 July 2025 20:17:05 +0000 (0:00:03.782) 0:01:12.851 *********** 2025-07-06 20:18:48.921898 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:48.921911 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:48.921918 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:48.921924 | orchestrator | 2025-07-06 20:18:48.921931 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-07-06 20:18:48.921938 | orchestrator | Sunday 06 July 2025 20:17:10 +0000 (0:00:04.875) 0:01:17.726 *********** 2025-07-06 20:18:48.921945 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:48.921951 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:48.921958 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:48.921964 | orchestrator | 2025-07-06 20:18:48.921971 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-07-06 20:18:48.921978 | orchestrator | Sunday 06 July 2025 20:17:15 +0000 (0:00:05.020) 0:01:22.747 *********** 2025-07-06 20:18:48.921985 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:48.921991 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:48.921998 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:48.922005 | orchestrator | 2025-07-06 20:18:48.922011 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-07-06 20:18:48.922166 | orchestrator | Sunday 06 July 2025 20:17:19 +0000 (0:00:03.676) 0:01:26.423 *********** 2025-07-06 20:18:48.922176 | 
orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:48.922183 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:48.922190 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:48.922196 | orchestrator | 2025-07-06 20:18:48.922203 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-07-06 20:18:48.922210 | orchestrator | Sunday 06 July 2025 20:17:19 +0000 (0:00:00.289) 0:01:26.712 *********** 2025-07-06 20:18:48.922217 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-06 20:18:48.922224 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:48.922231 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-06 20:18:48.922238 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:48.922244 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-06 20:18:48.922251 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:48.922258 | orchestrator | 2025-07-06 20:18:48.922265 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-07-06 20:18:48.922272 | orchestrator | Sunday 06 July 2025 20:17:25 +0000 (0:00:05.665) 0:01:32.378 *********** 2025-07-06 20:18:48.922285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:18:48.922309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': 
True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:18:48.922339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:18:48.922348 | orchestrator | 2025-07-06 20:18:48.922355 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-06 20:18:48.922362 | orchestrator | Sunday 06 July 2025 20:17:30 +0000 (0:00:04.998) 0:01:37.376 *********** 2025-07-06 20:18:48.922368 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:48.922380 | orchestrator | skipping: [testbed-node-1] 2025-07-06 
20:18:48.922387 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:48.922393 | orchestrator | 2025-07-06 20:18:48.922400 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-07-06 20:18:48.922407 | orchestrator | Sunday 06 July 2025 20:17:30 +0000 (0:00:00.535) 0:01:37.912 *********** 2025-07-06 20:18:48.922414 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:48.922433 | orchestrator | 2025-07-06 20:18:48.922440 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-07-06 20:18:48.922447 | orchestrator | Sunday 06 July 2025 20:17:32 +0000 (0:00:02.032) 0:01:39.945 *********** 2025-07-06 20:18:48.922453 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:48.922461 | orchestrator | 2025-07-06 20:18:48.922469 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-07-06 20:18:48.922477 | orchestrator | Sunday 06 July 2025 20:17:34 +0000 (0:00:02.115) 0:01:42.060 *********** 2025-07-06 20:18:48.922484 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:48.922492 | orchestrator | 2025-07-06 20:18:48.922500 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-07-06 20:18:48.922511 | orchestrator | Sunday 06 July 2025 20:17:36 +0000 (0:00:01.997) 0:01:44.058 *********** 2025-07-06 20:18:48.922519 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:48.922527 | orchestrator | 2025-07-06 20:18:48.922533 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-07-06 20:18:48.922540 | orchestrator | Sunday 06 July 2025 20:18:08 +0000 (0:00:32.067) 0:02:16.125 *********** 2025-07-06 20:18:48.922547 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:48.922554 | orchestrator | 2025-07-06 20:18:48.922561 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-07-06 20:18:48.922567 | orchestrator | Sunday 06 July 2025 20:18:11 +0000 (0:00:02.707) 0:02:18.833 *********** 2025-07-06 20:18:48.922574 | orchestrator | 2025-07-06 20:18:48.922581 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-07-06 20:18:48.922588 | orchestrator | Sunday 06 July 2025 20:18:11 +0000 (0:00:00.164) 0:02:18.997 *********** 2025-07-06 20:18:48.922594 | orchestrator | 2025-07-06 20:18:48.922601 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-07-06 20:18:48.922608 | orchestrator | Sunday 06 July 2025 20:18:11 +0000 (0:00:00.206) 0:02:19.203 *********** 2025-07-06 20:18:48.922615 | orchestrator | 2025-07-06 20:18:48.922622 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-07-06 20:18:48.922628 | orchestrator | Sunday 06 July 2025 20:18:12 +0000 (0:00:00.157) 0:02:19.360 *********** 2025-07-06 20:18:48.922635 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:48.922642 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:18:48.922648 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:18:48.922663 | orchestrator | 2025-07-06 20:18:48.922671 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:18:48.922679 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-07-06 20:18:48.922687 | orchestrator | 
testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-06 20:18:48.922694 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-06 20:18:48.922701 | orchestrator |
2025-07-06 20:18:48.922707 | orchestrator |
2025-07-06 20:18:48.922714 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:18:48.922721 | orchestrator | Sunday 06 July 2025 20:18:47 +0000 (0:00:35.153) 0:02:54.514 ***********
2025-07-06 20:18:48.922728 | orchestrator | ===============================================================================
2025-07-06 20:18:48.922734 | orchestrator | glance : Restart glance-api container ---------------------------------- 35.15s
2025-07-06 20:18:48.922746 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 32.07s
2025-07-06 20:18:48.922753 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.00s
2025-07-06 20:18:48.922760 | orchestrator | glance : Ensuring config directories exist ------------------------------ 6.68s
2025-07-06 20:18:48.922766 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.67s
2025-07-06 20:18:48.922773 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.67s
2025-07-06 20:18:48.922780 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.02s
2025-07-06 20:18:48.922787 | orchestrator | glance : Check glance containers ---------------------------------------- 5.00s
2025-07-06 20:18:48.922793 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.92s
2025-07-06 20:18:48.922800 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.88s
2025-07-06 20:18:48.922807 | orchestrator | glance : Copying over config.json files for services -------------------- 4.46s
2025-07-06 20:18:48.922817 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.22s
2025-07-06 20:18:48.922824 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.14s
2025-07-06 20:18:48.922830 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.02s
2025-07-06 20:18:48.922837 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.89s
2025-07-06 20:18:48.922844 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.78s
2025-07-06 20:18:48.922851 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.68s
2025-07-06 20:18:48.922857 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.59s
2025-07-06 20:18:48.922864 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.35s
2025-07-06 20:18:48.922870 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.19s
2025-07-06 20:18:48.922877 | orchestrator | 2025-07-06 20:18:48 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED
2025-07-06 20:18:48.922884 | orchestrator | 2025-07-06 20:18:48 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED
2025-07-06 20:18:48.922961 | orchestrator | 2025-07-06 20:18:48 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in
state STARTED 2025-07-06 20:18:48.922971 | orchestrator | 2025-07-06 20:18:48 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:51.965974 | orchestrator | 2025-07-06 20:18:51 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:51.968905 | orchestrator | 2025-07-06 20:18:51 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:18:51.973818 | orchestrator | 2025-07-06 20:18:51 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:51.975113 | orchestrator | 2025-07-06 20:18:51 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:51.975504 | orchestrator | 2025-07-06 20:18:51 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:55.028236 | orchestrator | 2025-07-06 20:18:55 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:55.028339 | orchestrator | 2025-07-06 20:18:55 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:18:55.028506 | orchestrator | 2025-07-06 20:18:55 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:55.029675 | orchestrator | 2025-07-06 20:18:55 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:55.029701 | orchestrator | 2025-07-06 20:18:55 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:58.064442 | orchestrator | 2025-07-06 20:18:58 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state STARTED 2025-07-06 20:18:58.065541 | orchestrator | 2025-07-06 20:18:58 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:18:58.066583 | orchestrator | 2025-07-06 20:18:58 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED 2025-07-06 20:18:58.067381 | orchestrator | 2025-07-06 20:18:58 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:18:58.067882 | orchestrator | 2025-07-06 20:18:58 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:19:01.118132 | orchestrator | 2025-07-06 20:19:01 | INFO  | Task d5b46331-53a1-44d5-b6d1-af475d4f612d is in state SUCCESS 2025-07-06 20:19:01.119822 | orchestrator | 2025-07-06 20:19:01.119870 | orchestrator | 2025-07-06 20:19:01.119883 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:19:01.119934 | orchestrator | 2025-07-06 20:19:01.119947 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:19:01.119960 | orchestrator | Sunday 06 July 2025 20:15:45 +0000 (0:00:00.284) 0:00:00.284 *********** 2025-07-06 20:19:01.119971 | orchestrator | ok: [testbed-manager] 2025-07-06 20:19:01.119984 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:19:01.119995 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:19:01.120054 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:19:01.120068 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:19:01.120079 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:19:01.120090 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:19:01.120125 | orchestrator | 2025-07-06 20:19:01.120248 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:19:01.120264 | orchestrator | Sunday 06 July 2025 20:15:46 +0000 (0:00:00.893) 0:00:01.178 *********** 2025-07-06 20:19:01.120276 | orchestrator | ok: [testbed-manager] => 
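
The repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines are the deployment wrapper polling its task queue until each task reports a terminal state; once a task reaches SUCCESS (as d5b46331 does above), its buffered play output is printed. A minimal sketch of such a poll loop, assuming a hypothetical get_task_state() lookup rather than the real osism client API:

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll until every task reaches a terminal state (illustrative sketch only)."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)          # e.g. "STARTED", "SUCCESS", "FAILURE"
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)
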
(item=enable_prometheus_True) 2025-07-06 20:19:01.120287 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-07-06 20:19:01.120299 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-07-06 20:19:01.120309 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-07-06 20:19:01.120320 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-07-06 20:19:01.120331 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-07-06 20:19:01.120342 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-07-06 20:19:01.120352 | orchestrator | 2025-07-06 20:19:01.120458 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-07-06 20:19:01.120475 | orchestrator | 2025-07-06 20:19:01.120489 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-07-06 20:19:01.120502 | orchestrator | Sunday 06 July 2025 20:15:47 +0000 (0:00:00.709) 0:00:01.888 *********** 2025-07-06 20:19:01.120516 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:19:01.120529 | orchestrator | 2025-07-06 20:19:01.120542 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-07-06 20:19:01.120555 | orchestrator | Sunday 06 July 2025 20:15:48 +0000 (0:00:01.556) 0:00:03.444 *********** 2025-07-06 20:19:01.120572 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-06 20:19:01.120614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.120629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.120643 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.120674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.120689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.120729 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.120744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.120801 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.120813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.120834 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.120953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.120972 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-06 20:19:01.120992 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.121005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.121052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.121065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.121077 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.121097 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.121109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.121127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.121180 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.121203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.121215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.121227 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.121238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.121256 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.121268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.121285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.121304 | orchestrator | 2025-07-06 20:19:01.121316 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-07-06 20:19:01.121327 | orchestrator | Sunday 06 July 2025 20:15:52 +0000 (0:00:03.876) 0:00:07.320 *********** 2025-07-06 20:19:01.121339 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:19:01.121350 | orchestrator | 2025-07-06 20:19:01.121361 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-07-06 20:19:01.121372 | orchestrator | Sunday 06 July 2025 20:15:54 +0000 (0:00:01.500) 0:00:08.821 *********** 2025-07-06 20:19:01.121384 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-06 20:19:01.121396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.121407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.121426 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.121438 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.121455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.121473 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.121484 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.121496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.121508 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.121520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.121538 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.121550 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.121572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.121585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.121597 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': 
{'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-06 20:19:01.121609 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.121621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.121639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.121667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.121700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.121712 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.121723 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.121735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.121764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.122953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.122988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-07-06 20:19:01.123019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.123032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.123044 | orchestrator | 2025-07-06 20:19:01.123055 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-07-06 20:19:01.123066 | orchestrator | Sunday 06 July 2025 20:16:00 +0000 (0:00:06.018) 0:00:14.839 *********** 2025-07-06 20:19:01.123093 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-06 20:19:01.123106 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:19:01.123118 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.123205 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-06 20:19:01.123237 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123249 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:19:01.123261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:19:01.123273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.123308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123320 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:19:01.123341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:19:01.123361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.123401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123413 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:19:01.123424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:19:01.123436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.123488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123505 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:19:01.123517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:19:01.123528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.123540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.123552 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:19:01.123563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:19:01.123575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:19:01.123601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.123613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.123629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.123641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.123652 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:19:01.123664 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:19:01.123675 | orchestrator | 2025-07-06 20:19:01.123686 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-07-06 20:19:01.123697 | orchestrator | Sunday 06 July 2025 20:16:02 +0000 (0:00:02.225) 0:00:17.065 *********** 2025-07-06 20:19:01.123708 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-06 20:19:01.123718 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:19:01.123735 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.123752 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-06 20:19:01.123782 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:19:01.123804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.123856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123867 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:19:01.123877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:19:01.123888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.123928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123938 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:19:01.123954 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:19:01.123964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:19:01.123974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.123990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.124000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.124015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:19:01.124025 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:19:01.124035 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:19:01.124046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.124062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.124091 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:19:01.124101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:19:01.124117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.124128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.124168 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:19:01.124192 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:19:01.124210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.124226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:19:01.124243 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:19:01.124262 | orchestrator | 2025-07-06 20:19:01.124272 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-07-06 20:19:01.124282 | orchestrator | Sunday 06 July 2025 20:16:04 +0000 (0:00:01.935) 0:00:19.000 *********** 2025-07-06 20:19:01.124292 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-06 20:19:01.124302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.124319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.124330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.124345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.124355 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.124365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.124381 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.124391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.124402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.124417 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.124427 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.124442 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.124452 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.124468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.124479 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.124490 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-06 20:19:01.124507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.124518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.124533 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.124543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.124560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.124571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.124581 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.124596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.124606 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.124621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.124631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.124647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.124657 | orchestrator | 2025-07-06 20:19:01.124667 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-07-06 20:19:01.124676 | orchestrator | Sunday 06 July 2025 20:16:11 +0000 (0:00:06.800) 0:00:25.800 *********** 2025-07-06 20:19:01.124686 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-06 20:19:01.124696 | orchestrator | 2025-07-06 20:19:01.124705 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-07-06 20:19:01.124715 | orchestrator | Sunday 06 July 2025 20:16:11 +0000 (0:00:00.866) 0:00:26.667 *********** 2025-07-06 20:19:01.124725 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098434, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8550296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124736 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098434, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8550296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124752 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098434, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8550296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.124762 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098434, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8550296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124777 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098434, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8550296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124796 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098434, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8550296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124806 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098424, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8530295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124817 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098424, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8530295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124827 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098424, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8530295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124842 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098434, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8550296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124852 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098424, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8530295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124872 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098408, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8480296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124882 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098424, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8530295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124892 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098424, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8530295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.124902 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098410, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8490295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124912 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098408, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8480296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124927 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098408, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8480296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124937 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098408, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8480296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124954 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098408, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8480296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124964 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098424, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8530295, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124974 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098410, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8490295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.124995 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098420, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8520296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125006 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098410, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8490295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125235 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098410, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8490295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125291 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098414, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8500295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125316 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098408, 'dev': 
86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8480296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.125327 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098408, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8480296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125337 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098410, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8490295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125347 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098420, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8520296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125357 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098410, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8490295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125397 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098420, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8520296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125409 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098420, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8520296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125430 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098420, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8520296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125441 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098420, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8520296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125451 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098414, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8500295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125461 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098419, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8520296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125471 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098414, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8500295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125507 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098419, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8520296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125525 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098414, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8500295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125540 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098414, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8500295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125550 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098426, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8540297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125561 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098410, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8490295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.125571 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098414, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8500295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125581 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098419, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8520296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125591 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098426, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8540297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125633 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098426, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8540297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125649 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098432, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8550296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125660 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098419, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8520296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125671 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098419, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8520296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125681 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098432, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8550296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125691 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098419, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8520296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125701 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098432, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8550296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125739 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098426, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8540297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125749 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098452, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8580296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125761 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098426, 'dev': 86, 'nlink': 1, 
'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8540297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125770 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098452, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8580296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125778 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098420, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8520296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.125786 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098426, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8540297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125795 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098452, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8580296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125832 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098432, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8550296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125843 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098432, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8550296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125857 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098432, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8550296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125867 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098428, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8540297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125877 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098428, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8540297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125886 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098428, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8540297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125901 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098452, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8580296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125931 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098413, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8490295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125942 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098452, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8580296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125956 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098414, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8500295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.125966 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098452, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8580296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125975 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098413, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8490295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.125985 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098428, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8540297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126000 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098418, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8510296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126077 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098413, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8490295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126091 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098428, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8540297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126105 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098428, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8540297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126114 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098406, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8480296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126124 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098418, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8510296, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126150 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098418, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8510296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126167 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098413, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8490295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126182 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098406, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8480296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126192 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098422, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8530295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126205 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098413, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8490295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126214 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 
1098419, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8520296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.126223 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098413, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8490295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126231 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098418, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8510296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126245 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098422, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8530295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126261 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098406, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8480296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126269 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098450, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8580296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126281 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098418, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8510296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126290 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098418, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8510296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126299 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098406, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8480296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126312 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098422, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8530295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126320 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098450, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8580296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126333 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098416, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8510296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126342 | 
orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098406, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8480296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126354 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098426, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8540297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.126362 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098406, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8480296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126371 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098450, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8580296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126384 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098422, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8530295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126392 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098422, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8530295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126406 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098416, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8510296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126415 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098436, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8560297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126423 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:19:01.126435 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098422, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8530295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126444 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098416, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8510296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126453 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098450, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8580296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126466 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098450, 'dev': 
86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8580296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126474 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098436, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8560297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126482 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:19:01.126496 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098450, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8580296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126504 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098416, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8510296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126518 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098416, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8510296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126527 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098432, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8550296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.126540 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098436, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8560297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126548 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:19:01.126557 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098436, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8560297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126565 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:19:01.126574 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098416, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8510296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126586 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098436, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8560297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126595 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:19:01.126603 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098436, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8560297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:19:01.126611 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:19:01.126624 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 334, 'inode': 1098452, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8580296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.126638 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098428, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8540297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.126646 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098413, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8490295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.126655 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098418, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8510296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.126663 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098406, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8480296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.126676 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098422, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8530295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.126685 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098450, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8580296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.126697 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098416, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8510296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.126710 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098436, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8560297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:19:01.126718 | orchestrator | 2025-07-06 20:19:01.126727 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-07-06 20:19:01.126735 | orchestrator | Sunday 06 July 2025 20:16:36 +0000 (0:00:24.752) 0:00:51.420 *********** 2025-07-06 20:19:01.126743 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-06 20:19:01.126751 | orchestrator | 2025-07-06 20:19:01.126759 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-07-06 20:19:01.126767 | orchestrator | Sunday 06 July 2025 20:16:37 +0000 (0:00:00.753) 0:00:52.173 *********** 2025-07-06 20:19:01.126775 | orchestrator | [WARNING]: Skipped 2025-07-06 20:19:01.126784 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:19:01.126792 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-07-06 20:19:01.126800 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:19:01.126808 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-07-06 20:19:01.126816 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:19:01.126824 | orchestrator | [WARNING]: Skipped 2025-07-06 20:19:01.126832 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:19:01.126840 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-07-06 20:19:01.126848 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:19:01.126856 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-07-06 20:19:01.126864 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-06 20:19:01.126872 | orchestrator | [WARNING]: Skipped 2025-07-06 
20:19:01.126880 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:19:01.126887 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-07-06 20:19:01.126895 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:19:01.126903 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-07-06 20:19:01.126911 | orchestrator | [WARNING]: Skipped 2025-07-06 20:19:01.126919 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:19:01.126927 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-07-06 20:19:01.126935 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:19:01.126943 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-07-06 20:19:01.126951 | orchestrator | [WARNING]: Skipped 2025-07-06 20:19:01.126959 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:19:01.126971 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-07-06 20:19:01.126979 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:19:01.126987 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-07-06 20:19:01.126995 | orchestrator | [WARNING]: Skipped 2025-07-06 20:19:01.127003 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:19:01.127011 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-07-06 20:19:01.127024 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:19:01.127032 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-07-06 20:19:01.127040 | orchestrator | [WARNING]: Skipped 2025-07-06 20:19:01.127048 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:19:01.127056 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-07-06 20:19:01.127064 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:19:01.127072 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-07-06 20:19:01.127080 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-06 20:19:01.127088 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-06 20:19:01.127096 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-06 20:19:01.127104 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-06 20:19:01.127111 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-06 20:19:01.127119 | orchestrator | 2025-07-06 20:19:01.127127 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-07-06 20:19:01.127152 | orchestrator | Sunday 06 July 2025 20:16:39 +0000 (0:00:02.447) 0:00:54.621 *********** 2025-07-06 20:19:01.127161 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-06 20:19:01.127169 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:19:01.127177 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-06 20:19:01.127185 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-06 20:19:01.127193 | 
orchestrator | skipping: [testbed-node-1] 2025-07-06 20:19:01.127201 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:19:01.127209 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-06 20:19:01.127217 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:19:01.127225 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-06 20:19:01.127233 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:19:01.127241 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-06 20:19:01.127249 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:19:01.127256 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-07-06 20:19:01.127264 | orchestrator | 2025-07-06 20:19:01.127272 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-07-06 20:19:01.127280 | orchestrator | Sunday 06 July 2025 20:16:56 +0000 (0:00:16.084) 0:01:10.706 *********** 2025-07-06 20:19:01.127288 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-06 20:19:01.127296 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:19:01.127304 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-06 20:19:01.127312 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:19:01.127320 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-06 20:19:01.127328 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:19:01.127336 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-06 20:19:01.127343 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:19:01.127351 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-06 20:19:01.127359 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:19:01.127367 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-06 20:19:01.127375 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:19:01.127388 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-07-06 20:19:01.127397 | orchestrator | 2025-07-06 20:19:01.127405 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-07-06 20:19:01.127413 | orchestrator | Sunday 06 July 2025 20:16:59 +0000 (0:00:03.607) 0:01:14.314 *********** 2025-07-06 20:19:01.127420 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-06 20:19:01.127428 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:19:01.127437 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-07-06 20:19:01.127445 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-06 20:19:01.127453 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:19:01.127461 | orchestrator | 
skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-06 20:19:01.127474 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:19:01.127482 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-06 20:19:01.127490 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:19:01.127498 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-06 20:19:01.127506 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:19:01.127514 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-06 20:19:01.127522 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:19:01.127530 | orchestrator | 2025-07-06 20:19:01.127538 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-07-06 20:19:01.127546 | orchestrator | Sunday 06 July 2025 20:17:01 +0000 (0:00:01.928) 0:01:16.242 *********** 2025-07-06 20:19:01.127554 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-06 20:19:01.127562 | orchestrator | 2025-07-06 20:19:01.127570 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-07-06 20:19:01.127578 | orchestrator | Sunday 06 July 2025 20:17:02 +0000 (0:00:00.784) 0:01:17.026 *********** 2025-07-06 20:19:01.127586 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:19:01.127594 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:19:01.127602 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:19:01.127610 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:19:01.127617 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:19:01.127629 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:19:01.127637 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:19:01.127645 | orchestrator | 2025-07-06 20:19:01.127653 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-07-06 20:19:01.127661 | orchestrator | Sunday 06 July 2025 20:17:03 +0000 (0:00:00.817) 0:01:17.843 *********** 2025-07-06 20:19:01.127669 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:19:01.127677 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:19:01.127685 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:19:01.127693 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:19:01.127700 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:19:01.127708 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:19:01.127716 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:19:01.127724 | orchestrator | 2025-07-06 20:19:01.127732 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-07-06 20:19:01.127740 | orchestrator | Sunday 06 July 2025 20:17:05 +0000 (0:00:02.447) 0:01:20.291 *********** 2025-07-06 20:19:01.127748 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-06 20:19:01.127761 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-06 20:19:01.127769 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  
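The *.rules files distributed earlier in this play (node.rules, mysql.rules, rabbitmq.rules, prometheus.rules and the rest) are ordinary Prometheus alerting/recording rule files in YAML; their contents are not echoed into this log. Purely as a minimal illustrative sketch of the general shape such a file takes - the group name, alert name, job label and threshold below are made up and are not taken from the files copied in this run:

    groups:
      - name: example-node-rules            # illustrative group name, not from this deployment
        rules:
          - alert: HostDown                 # hypothetical alert
            expr: up{job="node"} == 0       # fires when a node-exporter target stops reporting
            for: 5m
            labels:
              severity: critical
            annotations:
              summary: "Instance {{ $labels.instance }} is unreachable"

Files of this shape are copied into the prometheus-server configuration directory on the Prometheus host, which matches the copy task above reporting changed only on testbed-manager while the testbed-node-* hosts are skipped.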
2025-07-06 20:19:01.127777 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:19:01.127785 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-06 20:19:01.127792 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:19:01.127800 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:19:01.127808 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:19:01.127816 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-06 20:19:01.127824 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:19:01.127832 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-06 20:19:01.127839 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:19:01.127847 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-06 20:19:01.127855 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:19:01.127863 | orchestrator | 2025-07-06 20:19:01.127871 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-07-06 20:19:01.127879 | orchestrator | Sunday 06 July 2025 20:17:07 +0000 (0:00:02.297) 0:01:22.588 *********** 2025-07-06 20:19:01.127887 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-06 20:19:01.127895 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:19:01.127903 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-06 20:19:01.127911 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:19:01.127918 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-06 20:19:01.127926 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:19:01.127934 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-06 20:19:01.127942 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:19:01.127950 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-06 20:19:01.127959 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-06 20:19:01.127967 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:19:01.127974 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:19:01.127982 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-07-06 20:19:01.127990 | orchestrator | 2025-07-06 20:19:01.127998 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-07-06 20:19:01.128010 | orchestrator | Sunday 06 July 2025 20:17:09 +0000 (0:00:01.967) 0:01:24.556 *********** 2025-07-06 20:19:01.128018 | orchestrator | [WARNING]: Skipped 2025-07-06 20:19:01.128027 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-07-06 20:19:01.128034 | orchestrator | due to this access issue: 2025-07-06 20:19:01.128042 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-07-06 20:19:01.128050 | orchestrator | not a directory 2025-07-06 
20:19:01.128058 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-06 20:19:01.128066 | orchestrator | 2025-07-06 20:19:01.128074 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-07-06 20:19:01.128082 | orchestrator | Sunday 06 July 2025 20:17:11 +0000 (0:00:01.623) 0:01:26.179 *********** 2025-07-06 20:19:01.128090 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:19:01.128098 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:19:01.128112 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:19:01.128120 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:19:01.128127 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:19:01.128176 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:19:01.128186 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:19:01.128194 | orchestrator | 2025-07-06 20:19:01.128202 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-07-06 20:19:01.128210 | orchestrator | Sunday 06 July 2025 20:17:12 +0000 (0:00:01.286) 0:01:27.466 *********** 2025-07-06 20:19:01.128218 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:19:01.128226 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:19:01.128233 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:19:01.128241 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:19:01.128253 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:19:01.128261 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:19:01.128269 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:19:01.128277 | orchestrator | 2025-07-06 20:19:01.128285 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-07-06 20:19:01.128293 | orchestrator | Sunday 06 July 2025 20:17:13 +0000 (0:00:00.781) 0:01:28.247 *********** 2025-07-06 20:19:01.128302 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-06 20:19:01.128311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.128320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.128328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.128341 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.128356 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.128368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.128377 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:19:01.128385 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.128393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.128402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.128410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.128433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.128446 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-06 20:19:01.128456 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.128464 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.128473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.128481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.128489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.128509 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.128518 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.128530 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.128539 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.128547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.128556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.128564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:19:01.128581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.128590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.128602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:19:01.128609 | orchestrator | 2025-07-06 20:19:01.128616 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-07-06 20:19:01.128623 | orchestrator | Sunday 06 July 2025 20:17:18 +0000 (0:00:04.828) 0:01:33.076 *********** 2025-07-06 20:19:01.128630 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-07-06 20:19:01.128637 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:19:01.128643 | orchestrator | 2025-07-06 20:19:01.128650 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-06 20:19:01.128657 | orchestrator | Sunday 06 July 2025 20:17:19 +0000 (0:00:00.938) 0:01:34.015 *********** 2025-07-06 20:19:01.128663 | orchestrator | 2025-07-06 20:19:01.128670 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-06 20:19:01.128677 | orchestrator | Sunday 06 July 2025 20:17:19 +0000 (0:00:00.190) 0:01:34.205 *********** 2025-07-06 20:19:01.128683 | orchestrator | 2025-07-06 20:19:01.128690 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-06 20:19:01.128697 | orchestrator | Sunday 06 July 2025 20:17:19 +0000 (0:00:00.103) 0:01:34.309 *********** 2025-07-06 20:19:01.128703 | orchestrator | 2025-07-06 20:19:01.128710 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-06 20:19:01.128717 | orchestrator | Sunday 06 July 2025 20:17:19 +0000 (0:00:00.065) 0:01:34.375 *********** 2025-07-06 20:19:01.128724 | orchestrator | 2025-07-06 20:19:01.128730 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-06 20:19:01.128737 | orchestrator | Sunday 06 July 2025 20:17:19 +0000 (0:00:00.061) 0:01:34.436 *********** 2025-07-06 20:19:01.128744 | orchestrator | 2025-07-06 20:19:01.128750 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-06 20:19:01.128757 | orchestrator | Sunday 06 July 2025 20:17:19 
+0000 (0:00:00.116) 0:01:34.553 *********** 2025-07-06 20:19:01.128764 | orchestrator | 2025-07-06 20:19:01.128770 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-06 20:19:01.128777 | orchestrator | Sunday 06 July 2025 20:17:19 +0000 (0:00:00.120) 0:01:34.673 *********** 2025-07-06 20:19:01.128788 | orchestrator | 2025-07-06 20:19:01.128795 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-07-06 20:19:01.128802 | orchestrator | Sunday 06 July 2025 20:17:20 +0000 (0:00:00.188) 0:01:34.861 *********** 2025-07-06 20:19:01.128809 | orchestrator | changed: [testbed-manager] 2025-07-06 20:19:01.128815 | orchestrator | 2025-07-06 20:19:01.128822 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-07-06 20:19:01.128829 | orchestrator | Sunday 06 July 2025 20:17:38 +0000 (0:00:18.069) 0:01:52.931 *********** 2025-07-06 20:19:01.128836 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:19:01.128842 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:19:01.128849 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:19:01.128856 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:19:01.128862 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:19:01.128869 | orchestrator | changed: [testbed-manager] 2025-07-06 20:19:01.128876 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:19:01.128882 | orchestrator | 2025-07-06 20:19:01.128889 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-07-06 20:19:01.128896 | orchestrator | Sunday 06 July 2025 20:17:52 +0000 (0:00:14.367) 0:02:07.299 *********** 2025-07-06 20:19:01.128902 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:19:01.128909 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:19:01.128916 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:19:01.128922 | orchestrator | 2025-07-06 20:19:01.128929 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-07-06 20:19:01.128936 | orchestrator | Sunday 06 July 2025 20:18:03 +0000 (0:00:10.520) 0:02:17.820 *********** 2025-07-06 20:19:01.128942 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:19:01.128949 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:19:01.128956 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:19:01.128962 | orchestrator | 2025-07-06 20:19:01.128969 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-07-06 20:19:01.128976 | orchestrator | Sunday 06 July 2025 20:18:10 +0000 (0:00:06.962) 0:02:24.783 *********** 2025-07-06 20:19:01.128983 | orchestrator | changed: [testbed-manager] 2025-07-06 20:19:01.128989 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:19:01.128996 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:19:01.129003 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:19:01.129009 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:19:01.129019 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:19:01.129026 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:19:01.129033 | orchestrator | 2025-07-06 20:19:01.129040 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-07-06 20:19:01.129046 | orchestrator | Sunday 06 July 2025 20:18:25 +0000 (0:00:15.669) 0:02:40.452 *********** 2025-07-06 20:19:01.129053 | 
orchestrator | changed: [testbed-manager]
2025-07-06 20:19:01.129060 | orchestrator |
2025-07-06 20:19:01.129067 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-07-06 20:19:01.129073 | orchestrator | Sunday 06 July 2025 20:18:33 +0000 (0:00:07.421) 0:02:47.873 ***********
2025-07-06 20:19:01.129080 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:19:01.129087 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:19:01.129093 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:19:01.129100 | orchestrator |
2025-07-06 20:19:01.129107 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-07-06 20:19:01.129113 | orchestrator | Sunday 06 July 2025 20:18:38 +0000 (0:00:05.821) 0:02:53.694 ***********
2025-07-06 20:19:01.129120 | orchestrator | changed: [testbed-manager]
2025-07-06 20:19:01.129127 | orchestrator |
2025-07-06 20:19:01.129144 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-07-06 20:19:01.129152 | orchestrator | Sunday 06 July 2025 20:18:44 +0000 (0:00:05.152) 0:02:58.847 ***********
2025-07-06 20:19:01.129159 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:19:01.129166 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:19:01.129178 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:19:01.129184 | orchestrator |
2025-07-06 20:19:01.129191 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:19:01.129201 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-06 20:19:01.129208 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-06 20:19:01.129215 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-06 20:19:01.129222 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-06 20:19:01.129229 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-06 20:19:01.129236 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-06 20:19:01.129243 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-06 20:19:01.129250 | orchestrator |
2025-07-06 20:19:01.129256 | orchestrator |
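The PLAY RECAP above is Ansible's per-host summary for the prometheus play; every host reports failed=0 and unreachable=0. When post-processing a console log like this one, such recap lines can be folded into counters. The small parser below is only an illustration added alongside the log, not part of the job itself; the sample line is copied from the recap above:

    import re

    RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

    def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
        """Turn 'host : ok=23 changed=14 ...' into ('host', {'ok': 23, ...})."""
        match = RECAP_RE.match(line.strip())
        if match is None:
            raise ValueError(f"not a recap line: {line!r}")
        counters = {
            key: int(value)
            for key, value in (pair.split("=") for pair in match["counters"].split())
        }
        return match["host"], counters

    host, counts = parse_recap_line(
        "testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0"
    )
    # No host failed or was unreachable in this play.
    assert counts["failed"] == 0 and counts["unreachable"] == 0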
2025-07-06 20:19:01.129263 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:19:01.129270 | orchestrator | Sunday 06 July 2025 20:18:57 +0000 (0:00:13.584) 0:03:12.432 ***********
2025-07-06 20:19:01.129277 | orchestrator | ===============================================================================
2025-07-06 20:19:01.129283 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.75s
2025-07-06 20:19:01.129290 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.07s
2025-07-06 20:19:01.129297 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.08s
2025-07-06 20:19:01.129303 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.67s
2025-07-06 20:19:01.129310 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.37s
2025-07-06 20:19:01.129317 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 13.58s
2025-07-06 20:19:01.129323 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.52s
2025-07-06 20:19:01.129330 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.42s
2025-07-06 20:19:01.129337 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.96s
2025-07-06 20:19:01.129344 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.80s
2025-07-06 20:19:01.129350 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.02s
2025-07-06 20:19:01.129357 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.82s
2025-07-06 20:19:01.129364 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.15s
2025-07-06 20:19:01.129370 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.83s
2025-07-06 20:19:01.129377 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.88s
2025-07-06 20:19:01.129384 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.61s
2025-07-06 20:19:01.129391 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.45s
2025-07-06 20:19:01.129397 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.45s
2025-07-06 20:19:01.129404 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.30s
2025-07-06 20:19:01.129415 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.23s
2025-07-06 20:19:01.129426 | orchestrator | 2025-07-06 20:19:01 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED
2025-07-06 20:19:01.129433 | orchestrator | 2025-07-06 20:19:01 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED
2025-07-06 20:19:01.129439 | orchestrator | 2025-07-06 20:19:01 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED
2025-07-06 20:19:01.129446 | orchestrator | 2025-07-06 20:19:01 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state STARTED
2025-07-06 20:19:01.129453 | orchestrator | 2025-07-06 20:19:01 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:19:04.175216 | orchestrator | 2025-07-06 20:19:04 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED
2025-07-06 20:19:04.177659 | orchestrator | 2025-07-06 20:19:04 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED
2025-07-06 20:19:04.179425 | orchestrator | 2025-07-06 20:19:04 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED
2025-07-06 20:19:04.181817 | orchestrator | 2025-07-06 20:19:04 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state STARTED
2025-07-06 20:19:04.181870 | orchestrator | 2025-07-06 20:19:04 | INFO  | Wait 1 second(s) until the next check
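The interleaved Task/Wait lines above and below come from the deploy tooling on the manager polling the state of the queued Kolla tasks until they finish. The state names shown (STARTED, SUCCESS) look like Celery task states, and the checks land roughly three seconds apart even though the message announces a one second wait, presumably because each round of status requests adds latency. A minimal sketch of such a polling loop, with a dummy get_state callable standing in for the real manager API, could look like this:

    import time

    def wait_for_tasks(task_ids, get_state, interval=1.0):
        """Poll task states until every task reaches a terminal state.

        get_state(task_id) -> str is supplied by the caller; in the job above
        that role is played by the OSISM manager API, which is not modelled here.
        """
        terminal = {"SUCCESS", "FAILURE", "REVOKED"}  # assumed Celery-style names
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in terminal:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval:.0f} second(s) until the next check")
                time.sleep(interval)

    # Dummy stand-in so the sketch runs on its own.
    if __name__ == "__main__":
        states = iter(["STARTED", "STARTED", "SUCCESS"])
        wait_for_tasks(["47296d43"], get_state=lambda _task_id: next(states))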
2025-07-06 20:19:07.226127 | orchestrator | 2025-07-06 20:19:07 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED
2025-07-06 20:19:07.228763 | orchestrator | 2025-07-06 20:19:07 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state STARTED
2025-07-06 20:19:07.230643 | orchestrator | 2025-07-06 20:19:07 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED
2025-07-06 20:19:07.232630 | orchestrator | 2025-07-06 20:19:07 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state STARTED
2025-07-06 20:19:07.232655 | orchestrator | 2025-07-06 20:19:07 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:20:14.112869 | orchestrator | 2025-07-06 20:20:14 | INFO  | Task 
47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED
2025-07-06 20:20:14.117808 | orchestrator | 2025-07-06 20:20:14 | INFO  | Task 41046f57-2951-43d4-ab25-c9e70f03a09f is in state SUCCESS
2025-07-06 20:20:14.122397 | orchestrator |
2025-07-06 20:20:14.122474 | orchestrator |
2025-07-06 20:20:14.122490 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:20:14.122503 | orchestrator |
2025-07-06 20:20:14.122582 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:20:14.122598 | orchestrator | Sunday 06 July 2025 20:16:11 +0000 (0:00:00.255) 0:00:00.255 ***********
2025-07-06 20:20:14.122610 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:20:14.122622 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:20:14.122633 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:20:14.122645 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:20:14.122656 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:20:14.122667 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:20:14.122678 | orchestrator |
2025-07-06 20:20:14.122689 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:20:14.122701 | orchestrator | Sunday 06 July 2025 20:16:12 +0000 (0:00:00.909) 0:00:01.165 ***********
2025-07-06 20:20:14.122712 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-07-06 20:20:14.122756 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-07-06 20:20:14.122770 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-07-06 20:20:14.122781 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-07-06 20:20:14.122792 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-07-06 20:20:14.122803 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-07-06 20:20:14.122814 | orchestrator |
2025-07-06 20:20:14.122826 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-07-06 20:20:14.122837 | orchestrator |
2025-07-06 20:20:14.122848 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-06 20:20:14.122859 | orchestrator | Sunday 06 July 2025 20:16:14 +0000 (0:00:01.868) 0:00:03.034 ***********
2025-07-06 20:20:14.122893 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 20:20:14.122906 | orchestrator |
2025-07-06 20:20:14.123039 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-07-06 20:20:14.123056 | orchestrator | Sunday 06 July 2025 20:16:17 +0000 (0:00:02.985) 0:00:06.019 ***********
2025-07-06 20:20:14.123070 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-07-06 20:20:14.123143 | orchestrator |
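The service-ks-register tasks in this play register the cinderv3 service (the changed item above) and its internal and public endpoints (the two changed items that follow) in Keystone. Kolla-Ansible drives this through its own Ansible modules; purely as an illustration, roughly the same registration could be written with openstacksdk as sketched here. The cloud name testbed and the region RegionOne are assumptions, while the endpoint URLs are the ones visible in the log:

    import openstack

    # Assumed clouds.yaml entry; admin credentials are required for this.
    conn = openstack.connect(cloud="testbed")

    # Equivalent of "cinder | Creating services": cinderv3 of type volumev3.
    service = conn.identity.create_service(
        name="cinderv3", type="volumev3", enabled=True
    )

    # Equivalent of "cinder | Creating endpoints"; URLs taken from the log.
    endpoints = {
        "internal": "https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s",
        "public": "https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s",
    }
    for interface, url in endpoints.items():
        conn.identity.create_endpoint(
            service_id=service.id,
            interface=interface,
            url=url,
            region_id="RegionOne",  # assumption: the region is not shown in the log
        )

2025-07-06 20:20:14.123160 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-07-06 20:20:14.123172 | orchestrator | Sunday 06 July 2025 20:16:20 +0000 (0:00:03.449) 0:00:09.468 ***********
2025-07-06 20:20:14.123186 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-07-06 20:20:14.123226 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s 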
-> public) 2025-07-06 20:20:14.123270 | orchestrator | 2025-07-06 20:20:14.123284 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-07-06 20:20:14.123297 | orchestrator | Sunday 06 July 2025 20:16:27 +0000 (0:00:06.989) 0:00:16.459 *********** 2025-07-06 20:20:14.123310 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-06 20:20:14.123323 | orchestrator | 2025-07-06 20:20:14.123334 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-07-06 20:20:14.123345 | orchestrator | Sunday 06 July 2025 20:16:31 +0000 (0:00:03.248) 0:00:19.707 *********** 2025-07-06 20:20:14.123356 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-06 20:20:14.123367 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-07-06 20:20:14.123378 | orchestrator | 2025-07-06 20:20:14.123389 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-07-06 20:20:14.123400 | orchestrator | Sunday 06 July 2025 20:16:35 +0000 (0:00:03.869) 0:00:23.577 *********** 2025-07-06 20:20:14.123411 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-06 20:20:14.123422 | orchestrator | 2025-07-06 20:20:14.123433 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-07-06 20:20:14.123444 | orchestrator | Sunday 06 July 2025 20:16:38 +0000 (0:00:03.784) 0:00:27.361 *********** 2025-07-06 20:20:14.123454 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-07-06 20:20:14.123465 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-07-06 20:20:14.123476 | orchestrator | 2025-07-06 20:20:14.123487 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-07-06 20:20:14.123498 | orchestrator | Sunday 06 July 2025 20:16:47 +0000 (0:00:09.095) 0:00:36.457 *********** 2025-07-06 20:20:14.123543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:14.123560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:14.123572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:14.123594 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.123606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.123624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.123646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.123674 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.123694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.123706 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.123724 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', 
'', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.123743 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.123755 | orchestrator | 2025-07-06 20:20:14.123767 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-06 20:20:14.123778 | orchestrator | Sunday 06 July 2025 20:16:50 +0000 (0:00:02.740) 0:00:39.198 *********** 2025-07-06 20:20:14.123789 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:14.123801 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:14.123811 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:14.123822 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:20:14.123833 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:20:14.123843 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:20:14.123864 | orchestrator | 2025-07-06 20:20:14.123875 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-06 20:20:14.123886 | orchestrator | Sunday 06 July 2025 20:16:51 +0000 (0:00:00.573) 0:00:39.771 *********** 2025-07-06 20:20:14.123896 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:14.123907 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:14.123918 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:14.123928 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:20:14.123939 | orchestrator | 2025-07-06 20:20:14.123950 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-07-06 20:20:14.123961 | orchestrator | Sunday 06 July 2025 20:16:52 +0000 (0:00:00.939) 0:00:40.711 *********** 2025-07-06 20:20:14.123972 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-07-06 20:20:14.123983 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-07-06 20:20:14.123994 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-07-06 20:20:14.124005 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-07-06 20:20:14.124016 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-07-06 20:20:14.124027 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-07-06 20:20:14.124037 | orchestrator | 2025-07-06 20:20:14.124048 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-07-06 20:20:14.124060 | orchestrator | Sunday 06 July 2025 20:16:54 +0000 (0:00:02.007) 0:00:42.719 *********** 2025-07-06 
20:20:14.124072 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-06 20:20:14.124085 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-06 20:20:14.124108 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-06 20:20:14.124167 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-06 20:20:14.124181 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-06 20:20:14.124192 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-06 20:20:14.124204 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-06 20:20:14.124229 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-06 20:20:14.124248 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-06 20:20:14.124260 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-06 20:20:14.124272 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-06 20:20:14.124283 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-06 20:20:14.124299 | orchestrator | 2025-07-06 20:20:14.124311 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-07-06 20:20:14.124322 | orchestrator | Sunday 06 July 2025 20:16:57 +0000 (0:00:03.314) 0:00:46.033 *********** 2025-07-06 
20:20:14.124333 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-07-06 20:20:14.124352 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-07-06 20:20:14.124363 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-07-06 20:20:14.124374 | orchestrator |
2025-07-06 20:20:14.124385 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-07-06 20:20:14.124396 | orchestrator | Sunday 06 July 2025 20:16:59 +0000 (0:00:02.314) 0:00:48.348 ***********
2025-07-06 20:20:14.124413 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-07-06 20:20:14.124425 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-07-06 20:20:14.124436 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-07-06 20:20:14.124447 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-07-06 20:20:14.124458 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-07-06 20:20:14.124469 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-07-06 20:20:14.124480 | orchestrator |
2025-07-06 20:20:14.124491 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-07-06 20:20:14.124502 | orchestrator | Sunday 06 July 2025 20:17:02 +0000 (0:00:03.066) 0:00:51.414 ***********
2025-07-06 20:20:14.124512 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-07-06 20:20:14.124523 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-07-06 20:20:14.124534 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-07-06 20:20:14.124545 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-07-06 20:20:14.124556 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-07-06 20:20:14.124567 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-07-06 20:20:14.124577 | orchestrator |
2025-07-06 20:20:14.124588 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-07-06 20:20:14.124599 | orchestrator | Sunday 06 July 2025 20:17:03 +0000 (0:00:01.029) 0:00:52.444 ***********
2025-07-06 20:20:14.124610 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:20:14.124621 | orchestrator |
2025-07-06 20:20:14.124632 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-07-06 20:20:14.124643 | orchestrator | Sunday 06 July 2025 20:17:04 +0000 (0:00:00.248) 0:00:52.692 ***********
2025-07-06 20:20:14.124654 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:20:14.124664 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:20:14.124675 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:20:14.124686 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:20:14.124696 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:20:14.124709 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:20:14.124726 | orchestrator |
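The two Copy over Ceph keyring files tasks above drop ceph.client.cinder.keyring and ceph.client.cinder-backup.keyring into the per-service configuration directories on the volume and backup hosts, and the follow-up task fixes ownership and permissions. A rough stdlib-only sketch of the same idea is shown below; the source directory, the target layout under /etc/kolla and the 0600 mode are assumptions for illustration, not necessarily the exact paths and modes Kolla-Ansible uses:

    import shutil
    from pathlib import Path

    # Assumed locations, for illustration only.
    SRC = Path("/opt/configuration/environments/kolla/files/overlays/cinder")
    KEYRINGS = {
        "cinder-volume": ["ceph.client.cinder.keyring"],
        "cinder-backup": ["ceph.client.cinder.keyring",
                          "ceph.client.cinder-backup.keyring"],
    }

    def stage_keyrings(config_root: Path = Path("/etc/kolla")) -> None:
        """Copy Ceph keyrings into per-service config dirs and tighten modes."""
        for service, names in KEYRINGS.items():
            target_dir = config_root / service
            target_dir.mkdir(parents=True, exist_ok=True)
            for name in names:
                dst = target_dir / name
                shutil.copy2(SRC / name, dst)
                # Roughly mirrors "Ensuring config directory has correct owner
                # and permission": keyrings should not be world-readable.
                dst.chmod(0o600)

2025-07-06 20:20:14.124745 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-06 20:20:14.124763 | orchestrator | Sunday 06 July 2025 20:17:05 +0000 (0:00:00.838) 0:00:53.532 ***********
2025-07-06 20:20:14.124783 | 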
orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:20:14.124803 | orchestrator | 2025-07-06 20:20:14.124815 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-07-06 20:20:14.124826 | orchestrator | Sunday 06 July 2025 20:17:06 +0000 (0:00:01.662) 0:00:55.195 *********** 2025-07-06 20:20:14.124838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:14.124870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:14.124892 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.124904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:14.124915 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.124934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.124950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.125575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.125603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.125615 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.125627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.125648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.125659 | orchestrator | 2025-07-06 20:20:14.125670 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-07-06 20:20:14.125686 | orchestrator | Sunday 06 July 2025 20:17:10 +0000 (0:00:03.500) 0:00:58.696 *********** 2025-07-06 20:20:14.125720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:20:14.125738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.125754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:20:14.125769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.125794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:20:14.125810 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:14.125832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.125849 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:14.125865 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:14.125893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.125912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.125930 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:20:14.125947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.125966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.125976 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:20:14.125992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.126011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.126076 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:20:14.126087 | orchestrator | 2025-07-06 20:20:14.126097 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-07-06 20:20:14.126107 | orchestrator | Sunday 06 July 2025 20:17:11 +0000 (0:00:01.424) 0:01:00.120 *********** 2025-07-06 20:20:14.126142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:20:14.126162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.126174 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:14.126185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.126201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.126213 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:20:14.126231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:20:14.126243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.126261 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:14.126272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:20:14.126283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.126294 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:14.126310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.126329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.126341 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:20:14.126353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.126370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.126381 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:20:14.126392 | orchestrator | 2025-07-06 20:20:14.126404 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-07-06 20:20:14.126415 | orchestrator | Sunday 06 July 2025 20:17:13 +0000 (0:00:01.882) 0:01:02.003 *********** 2025-07-06 20:20:14.126426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:14.126442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:14.126461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:14.126487 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126499 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126527 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126571 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126582 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126592 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126602 | orchestrator | 2025-07-06 20:20:14.126613 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-07-06 20:20:14.126622 | orchestrator | Sunday 06 July 2025 20:17:17 +0000 (0:00:03.708) 0:01:05.712 *********** 2025-07-06 20:20:14.126637 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-06 20:20:14.126647 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:20:14.126657 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-06 20:20:14.126667 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:20:14.126676 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-06 20:20:14.126686 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:20:14.126696 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-06 20:20:14.126706 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-06 20:20:14.126721 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-06 20:20:14.126731 | orchestrator | 2025-07-06 20:20:14.126741 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-07-06 20:20:14.126756 | orchestrator | Sunday 06 July 2025 20:17:19 +0000 (0:00:02.309) 0:01:08.022 *********** 2025-07-06 20:20:14.126767 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:14.126777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:14.126787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:14.126802 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126819 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126838 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126869 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126883 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126917 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.126927 | orchestrator | 2025-07-06 20:20:14.126937 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-07-06 20:20:14.126947 | orchestrator | Sunday 06 July 2025 20:17:29 +0000 (0:00:09.923) 0:01:17.945 *********** 2025-07-06 20:20:14.126956 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:14.126966 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:14.126976 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:14.126985 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:20:14.126995 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:20:14.127004 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:20:14.127014 | orchestrator | 2025-07-06 20:20:14.127024 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-07-06 20:20:14.127033 | orchestrator | Sunday 06 July 2025 20:17:31 +0000 (0:00:02.185) 0:01:20.131 *********** 2025-07-06 
20:20:14.127043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:20:14.127054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.127074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:20:14.127092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.127102 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:14.127135 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:14.127147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.127158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.127168 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:20:14.127178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.127225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:20:14.127237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.127247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.127257 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:14.127267 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:20:14.127277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.127288 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:20:14.127303 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:20:14.127313 | orchestrator | 2025-07-06 20:20:14.127323 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-07-06 20:20:14.127333 | orchestrator | Sunday 06 July 2025 20:17:32 +0000 (0:00:01.196) 0:01:21.327 *********** 2025-07-06 20:20:14.127343 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:14.127353 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:14.127362 | orchestrator | skipping: [testbed-node-2] 
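Every container definition echoed in the items above carries the same `healthcheck` mapping (`interval`, `retries`, `start_period`, `test`, `timeout`). As an illustrative aside, not part of the build output, the minimal sketch below shows how such a mapping could be translated into equivalent `docker run` health flags; the helper name is hypothetical and the input values are copied from the cinder-api item above.

```python
# Illustrative sketch only: translate a kolla-style healthcheck mapping (as printed in
# the task items above) into the corresponding `docker run` health options.
def healthcheck_to_docker_args(hc):
    # hc["test"] is ["CMD-SHELL", "<command>"]; drop the CMD-SHELL marker, keep the command.
    return [
        "--health-cmd", " ".join(hc["test"][1:]),
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

print(healthcheck_to_docker_args({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"],
    "timeout": "30",
}))
```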
2025-07-06 20:20:14.127376 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:20:14.127387 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:20:14.127396 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:20:14.127406 | orchestrator | 2025-07-06 20:20:14.127416 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-07-06 20:20:14.127426 | orchestrator | Sunday 06 July 2025 20:17:33 +0000 (0:00:00.644) 0:01:21.972 *********** 2025-07-06 20:20:14.127443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:14.127454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:14.127464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:14.127475 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.127499 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.127516 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.127526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.127537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.127547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.127563 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.127577 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.127595 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:14.127605 | orchestrator | 2025-07-06 20:20:14.127615 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-06 20:20:14.127625 | orchestrator | Sunday 06 July 2025 20:17:35 +0000 (0:00:02.156) 0:01:24.128 *********** 2025-07-06 20:20:14.127635 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:14.127645 | orchestrator | skipping: [testbed-node-1] 
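A detail worth noting in the `volumes` lists printed above is the empty-string entries (`''`): they are placeholders left by optional mounts that are not enabled in this deployment and are presumably dropped before the container is created. A minimal, hypothetical illustration of that filtering (values copied from the cinder-volume item):

```python
# Hypothetical illustration only: remove the empty placeholders that optional mounts
# leave behind in a kolla-style service definition before the volume list is used.
service = {
    "container_name": "cinder_volume",
    "volumes": [
        "/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "",  # optional mount that is not enabled in this deployment
        "kolla_logs:/var/log/kolla/",
        "",
    ],
}

effective_volumes = [volume for volume in service["volumes"] if volume]
print(effective_volumes)
```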
2025-07-06 20:20:14.127654 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:14.127664 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:20:14.127674 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:20:14.127683 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:20:14.127693 | orchestrator | 2025-07-06 20:20:14.127703 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-07-06 20:20:14.127712 | orchestrator | Sunday 06 July 2025 20:17:36 +0000 (0:00:00.721) 0:01:24.849 *********** 2025-07-06 20:20:14.127722 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:14.127731 | orchestrator | 2025-07-06 20:20:14.127741 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-07-06 20:20:14.127751 | orchestrator | Sunday 06 July 2025 20:17:38 +0000 (0:00:02.017) 0:01:26.867 *********** 2025-07-06 20:20:14.127761 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:14.127771 | orchestrator | 2025-07-06 20:20:14.127780 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-07-06 20:20:14.127790 | orchestrator | Sunday 06 July 2025 20:17:40 +0000 (0:00:02.618) 0:01:29.486 *********** 2025-07-06 20:20:14.127800 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:14.127813 | orchestrator | 2025-07-06 20:20:14.127823 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-06 20:20:14.127833 | orchestrator | Sunday 06 July 2025 20:18:03 +0000 (0:00:22.072) 0:01:51.559 *********** 2025-07-06 20:20:14.127843 | orchestrator | 2025-07-06 20:20:14.127852 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-06 20:20:14.127862 | orchestrator | Sunday 06 July 2025 20:18:03 +0000 (0:00:00.070) 0:01:51.629 *********** 2025-07-06 20:20:14.127872 | orchestrator | 2025-07-06 20:20:14.127881 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-06 20:20:14.127891 | orchestrator | Sunday 06 July 2025 20:18:03 +0000 (0:00:00.117) 0:01:51.747 *********** 2025-07-06 20:20:14.127901 | orchestrator | 2025-07-06 20:20:14.127910 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-06 20:20:14.127920 | orchestrator | Sunday 06 July 2025 20:18:03 +0000 (0:00:00.067) 0:01:51.814 *********** 2025-07-06 20:20:14.127930 | orchestrator | 2025-07-06 20:20:14.127940 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-06 20:20:14.127949 | orchestrator | Sunday 06 July 2025 20:18:03 +0000 (0:00:00.062) 0:01:51.877 *********** 2025-07-06 20:20:14.127959 | orchestrator | 2025-07-06 20:20:14.127969 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-06 20:20:14.127978 | orchestrator | Sunday 06 July 2025 20:18:03 +0000 (0:00:00.064) 0:01:51.941 *********** 2025-07-06 20:20:14.127988 | orchestrator | 2025-07-06 20:20:14.127997 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-07-06 20:20:14.128007 | orchestrator | Sunday 06 July 2025 20:18:03 +0000 (0:00:00.060) 0:01:52.001 *********** 2025-07-06 20:20:14.128017 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:14.128026 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:20:14.128036 | orchestrator | changed: 
[testbed-node-2]
2025-07-06 20:20:14.128046 | orchestrator |
2025-07-06 20:20:14.128055 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-07-06 20:20:14.128065 | orchestrator | Sunday 06 July 2025 20:18:31 +0000 (0:00:28.271) 0:02:20.273 ***********
2025-07-06 20:20:14.128075 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:20:14.128084 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:20:14.128094 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:20:14.128104 | orchestrator |
2025-07-06 20:20:14.128171 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-07-06 20:20:14.128183 | orchestrator | Sunday 06 July 2025 20:18:43 +0000 (0:00:11.551) 0:02:31.825 ***********
2025-07-06 20:20:14.128193 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:20:14.128203 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:20:14.128218 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:20:14.128228 | orchestrator |
2025-07-06 20:20:14.128237 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-07-06 20:20:14.128247 | orchestrator | Sunday 06 July 2025 20:20:05 +0000 (0:01:21.700) 0:03:53.525 ***********
2025-07-06 20:20:14.128257 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:20:14.128267 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:20:14.128276 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:20:14.128286 | orchestrator |
2025-07-06 20:20:14.128296 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-07-06 20:20:14.128305 | orchestrator | Sunday 06 July 2025 20:20:12 +0000 (0:00:07.746) 0:04:01.272 ***********
2025-07-06 20:20:14.128315 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:20:14.128325 | orchestrator |
2025-07-06 20:20:14.128334 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:20:14.128351 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-06 20:20:14.128362 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-06 20:20:14.128379 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-06 20:20:14.128389 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-06 20:20:14.128399 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-06 20:20:14.128407 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-06 20:20:14.128415 | orchestrator |
2025-07-06 20:20:14.128422 | orchestrator |
2025-07-06 20:20:14.128431 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:20:14.128439 | orchestrator | Sunday 06 July 2025 20:20:13 +0000 (0:00:00.664) 0:04:01.937 ***********
2025-07-06 20:20:14.128447 | orchestrator | ===============================================================================
2025-07-06 20:20:14.128455 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 81.70s
2025-07-06 20:20:14.128463 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 28.27s
2025-07-06 20:20:14.128471 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 22.07s
2025-07-06 20:20:14.128479 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 11.55s
2025-07-06 20:20:14.128487 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.92s
2025-07-06 20:20:14.128495 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 9.10s
2025-07-06 20:20:14.128503 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 7.75s
2025-07-06 20:20:14.128511 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.99s
2025-07-06 20:20:14.128522 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.87s
2025-07-06 20:20:14.128530 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.78s
2025-07-06 20:20:14.128538 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.71s
2025-07-06 20:20:14.128546 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.50s
2025-07-06 20:20:14.128554 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.45s
2025-07-06 20:20:14.128562 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.31s
2025-07-06 20:20:14.128570 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.25s
2025-07-06 20:20:14.128578 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.07s
2025-07-06 20:20:14.128586 | orchestrator | cinder : include_tasks -------------------------------------------------- 2.99s
2025-07-06 20:20:14.128606 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.74s
2025-07-06 20:20:14.128614 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.62s
2025-07-06 20:20:14.128622 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.32s
2025-07-06 20:20:14.128640 | orchestrator | 2025-07-06 20:20:14 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED
2025-07-06 20:20:14.128648 | orchestrator | 2025-07-06 20:20:14 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state STARTED
2025-07-06 20:20:14.128657 | orchestrator | 2025-07-06 20:20:14 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:20:17.147609 | orchestrator | 2025-07-06 20:20:17 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED
2025-07-06 20:20:17.147690 | orchestrator | 2025-07-06 20:20:17 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED
2025-07-06 20:20:17.147825 | orchestrator | 2025-07-06 20:20:17 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED
2025-07-06 20:20:17.148681 | orchestrator | 2025-07-06 20:20:17 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state STARTED
2025-07-06 20:20:17.148712 | orchestrator | 2025-07-06 20:20:17 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:20:20.173876 | orchestrator | 2025-07-06 20:20:20 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED
2025-07-06 20:20:20.174000 | orchestrator | 2025-07-06 20:20:20 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06
20:20:20.174460 | orchestrator | 2025-07-06 20:20:20 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:20:20.175194 | orchestrator | 2025-07-06 20:20:20 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state STARTED 2025-07-06 20:20:20.175219 | orchestrator | 2025-07-06 20:20:20 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:23.215742 | orchestrator | 2025-07-06 20:20:23 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:20:23.215916 | orchestrator | 2025-07-06 20:20:23 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:20:23.216391 | orchestrator | 2025-07-06 20:20:23 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:20:23.217037 | orchestrator | 2025-07-06 20:20:23 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state STARTED 2025-07-06 20:20:23.217065 | orchestrator | 2025-07-06 20:20:23 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:26.253580 | orchestrator | 2025-07-06 20:20:26 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:20:26.254149 | orchestrator | 2025-07-06 20:20:26 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:20:26.256151 | orchestrator | 2025-07-06 20:20:26 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:20:26.256958 | orchestrator | 2025-07-06 20:20:26 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state STARTED 2025-07-06 20:20:26.257001 | orchestrator | 2025-07-06 20:20:26 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:29.301249 | orchestrator | 2025-07-06 20:20:29 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:20:29.301351 | orchestrator | 2025-07-06 20:20:29 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:20:29.305221 | orchestrator | 2025-07-06 20:20:29 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:20:29.305984 | orchestrator | 2025-07-06 20:20:29 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state STARTED 2025-07-06 20:20:29.306009 | orchestrator | 2025-07-06 20:20:29 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:32.339592 | orchestrator | 2025-07-06 20:20:32 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:20:32.339926 | orchestrator | 2025-07-06 20:20:32 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:20:32.339962 | orchestrator | 2025-07-06 20:20:32 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:20:32.340819 | orchestrator | 2025-07-06 20:20:32 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state STARTED 2025-07-06 20:20:32.340845 | orchestrator | 2025-07-06 20:20:32 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:35.370990 | orchestrator | 2025-07-06 20:20:35 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:20:35.371169 | orchestrator | 2025-07-06 20:20:35 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:20:35.372198 | orchestrator | 2025-07-06 20:20:35 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:20:35.372310 | orchestrator | 2025-07-06 20:20:35 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state STARTED 2025-07-06 
20:20:35.372326 | orchestrator | 2025-07-06 20:20:35 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:38.400876 | orchestrator | 2025-07-06 20:20:38 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:20:38.400959 | orchestrator | 2025-07-06 20:20:38 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:20:38.401986 | orchestrator | 2025-07-06 20:20:38 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:20:38.402506 | orchestrator | 2025-07-06 20:20:38 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state STARTED 2025-07-06 20:20:38.402647 | orchestrator | 2025-07-06 20:20:38 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:41.445687 | orchestrator | 2025-07-06 20:20:41 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:20:41.445878 | orchestrator | 2025-07-06 20:20:41 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:20:41.447482 | orchestrator | 2025-07-06 20:20:41 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:20:41.447861 | orchestrator | 2025-07-06 20:20:41 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state STARTED 2025-07-06 20:20:41.447893 | orchestrator | 2025-07-06 20:20:41 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:44.468653 | orchestrator | 2025-07-06 20:20:44 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:20:44.468768 | orchestrator | 2025-07-06 20:20:44 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:20:44.469147 | orchestrator | 2025-07-06 20:20:44 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:20:44.470255 | orchestrator | 2025-07-06 20:20:44 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state STARTED 2025-07-06 20:20:44.470347 | orchestrator | 2025-07-06 20:20:44 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:47.505668 | orchestrator | 2025-07-06 20:20:47 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:20:47.506340 | orchestrator | 2025-07-06 20:20:47 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:20:47.508101 | orchestrator | 2025-07-06 20:20:47 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:20:47.508807 | orchestrator | 2025-07-06 20:20:47 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state STARTED 2025-07-06 20:20:47.508835 | orchestrator | 2025-07-06 20:20:47 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:50.533449 | orchestrator | 2025-07-06 20:20:50 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:20:50.533575 | orchestrator | 2025-07-06 20:20:50 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:20:50.534301 | orchestrator | 2025-07-06 20:20:50 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:20:50.535503 | orchestrator | 2025-07-06 20:20:50 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state STARTED 2025-07-06 20:20:50.535555 | orchestrator | 2025-07-06 20:20:50 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:53.562827 | orchestrator | 2025-07-06 20:20:53 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:20:53.562924 | 
orchestrator | 2025-07-06 20:20:53 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:20:53.563717 | orchestrator | 2025-07-06 20:20:53 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:20:53.564578 | orchestrator | 2025-07-06 20:20:53 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state STARTED 2025-07-06 20:20:53.564601 | orchestrator | 2025-07-06 20:20:53 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:56.600174 | orchestrator | 2025-07-06 20:20:56.600250 | orchestrator | 2025-07-06 20:20:56.600257 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:20:56.600261 | orchestrator | 2025-07-06 20:20:56.600266 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:20:56.600270 | orchestrator | Sunday 06 July 2025 20:19:02 +0000 (0:00:00.263) 0:00:00.263 *********** 2025-07-06 20:20:56.600274 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:20:56.600281 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:20:56.600284 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:20:56.600288 | orchestrator | 2025-07-06 20:20:56.600292 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:20:56.600296 | orchestrator | Sunday 06 July 2025 20:19:02 +0000 (0:00:00.287) 0:00:00.551 *********** 2025-07-06 20:20:56.600301 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-07-06 20:20:56.600305 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-07-06 20:20:56.600309 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-07-06 20:20:56.600313 | orchestrator | 2025-07-06 20:20:56.600318 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-07-06 20:20:56.600322 | orchestrator | 2025-07-06 20:20:56.600326 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-07-06 20:20:56.600330 | orchestrator | Sunday 06 July 2025 20:19:03 +0000 (0:00:00.422) 0:00:00.973 *********** 2025-07-06 20:20:56.600334 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:20:56.600338 | orchestrator | 2025-07-06 20:20:56.600353 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-07-06 20:20:56.600357 | orchestrator | Sunday 06 July 2025 20:19:03 +0000 (0:00:00.491) 0:00:01.464 *********** 2025-07-06 20:20:56.600361 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-07-06 20:20:56.600365 | orchestrator | 2025-07-06 20:20:56.600369 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-07-06 20:20:56.600373 | orchestrator | Sunday 06 July 2025 20:19:06 +0000 (0:00:03.216) 0:00:04.681 *********** 2025-07-06 20:20:56.600376 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-07-06 20:20:56.600381 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-07-06 20:20:56.600385 | orchestrator | 2025-07-06 20:20:56.600389 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-07-06 20:20:56.600392 | orchestrator | Sunday 06 July 2025 
20:19:13 +0000 (0:00:06.286) 0:00:10.967 *********** 2025-07-06 20:20:56.600396 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-06 20:20:56.600400 | orchestrator | 2025-07-06 20:20:56.600404 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-07-06 20:20:56.600408 | orchestrator | Sunday 06 July 2025 20:19:16 +0000 (0:00:03.329) 0:00:14.297 *********** 2025-07-06 20:20:56.600412 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-06 20:20:56.600430 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-07-06 20:20:56.600434 | orchestrator | 2025-07-06 20:20:56.600438 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-07-06 20:20:56.600442 | orchestrator | Sunday 06 July 2025 20:19:20 +0000 (0:00:03.862) 0:00:18.159 *********** 2025-07-06 20:20:56.600446 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-06 20:20:56.600450 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-07-06 20:20:56.600453 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-07-06 20:20:56.600457 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-07-06 20:20:56.600461 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-07-06 20:20:56.600465 | orchestrator | 2025-07-06 20:20:56.600469 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-07-06 20:20:56.600473 | orchestrator | Sunday 06 July 2025 20:19:35 +0000 (0:00:15.241) 0:00:33.401 *********** 2025-07-06 20:20:56.600476 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-07-06 20:20:56.600480 | orchestrator | 2025-07-06 20:20:56.600484 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-07-06 20:20:56.600488 | orchestrator | Sunday 06 July 2025 20:19:40 +0000 (0:00:04.366) 0:00:37.768 *********** 2025-07-06 20:20:56.600494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:56.600512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:56.600520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:56.600529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.600534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.600538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.600547 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.600553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.600559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.600567 | orchestrator | 2025-07-06 20:20:56.600571 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-07-06 20:20:56.600575 | orchestrator | Sunday 06 July 2025 20:19:41 +0000 (0:00:01.841) 0:00:39.609 *********** 2025-07-06 20:20:56.600579 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-07-06 20:20:56.600582 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-07-06 20:20:56.600586 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-07-06 20:20:56.600590 | orchestrator | 2025-07-06 20:20:56.600594 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-07-06 20:20:56.600598 | orchestrator | Sunday 06 July 2025 20:19:43 +0000 (0:00:01.792) 0:00:41.402 *********** 2025-07-06 20:20:56.600602 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:56.600606 | orchestrator | 2025-07-06 20:20:56.600609 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-07-06 20:20:56.600613 | orchestrator | Sunday 06 July 2025 20:19:43 +0000 (0:00:00.131) 0:00:41.533 *********** 2025-07-06 20:20:56.600617 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:56.600621 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:56.600624 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:56.600628 | orchestrator | 2025-07-06 20:20:56.600632 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-07-06 20:20:56.600636 | orchestrator | Sunday 06 July 2025 
20:19:44 +0000 (0:00:00.875) 0:00:42.408 *********** 2025-07-06 20:20:56.600640 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:20:56.600644 | orchestrator | 2025-07-06 20:20:56.600648 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-07-06 20:20:56.600651 | orchestrator | Sunday 06 July 2025 20:19:46 +0000 (0:00:01.396) 0:00:43.805 *********** 2025-07-06 20:20:56.600655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:56.600665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:56.600672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:56.600679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.600683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.600687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.600691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.600698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.600708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.600712 | orchestrator | 2025-07-06 20:20:56.600717 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-07-06 20:20:56.600721 | orchestrator | Sunday 06 July 2025 20:19:49 +0000 (0:00:03.262) 0:00:47.068 *********** 2025-07-06 20:20:56.600726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:20:56.600730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.600735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.600740 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:56.600748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:20:56.600759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.600764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:20:56.600768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.600773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.600778 | orchestrator | skipping: [testbed-node-1] 
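The service-ks-register steps earlier in this barbican play (Creating services, endpoints, projects, users, and roles, then granting user roles) are standard Keystone registrations. As a hedged illustration, not taken from the kolla-ansible role itself, the sketch below shows roughly equivalent calls with openstacksdk; the cloud name is an assumption, the endpoint URLs mirror the log, and region and project handling are omitted.

```python
# Hedged sketch, not the kolla-ansible implementation: register the barbican service
# and its internal/public endpoints in Keystone using openstacksdk.
import openstack

conn = openstack.connect(cloud="testbed")  # assumes a matching clouds.yaml entry

service = conn.identity.create_service(name="barbican", type="key-manager")
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:9311"),
    ("public", "https://api.testbed.osism.xyz:9311"),
]:
    conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)
```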
2025-07-06 20:20:56.600782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.600787 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:56.600791 | orchestrator | 2025-07-06 20:20:56.600805 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-07-06 20:20:56.600810 | orchestrator | Sunday 06 July 2025 20:19:50 +0000 (0:00:00.813) 0:00:47.881 *********** 2025-07-06 20:20:56.600814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:20:56.600821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.600826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:20:56.600831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.600835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.600848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.600853 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:56.600857 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:56.600862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:20:56.600867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.600872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.600876 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:56.600880 | orchestrator | 2025-07-06 20:20:56.600884 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-07-06 20:20:56.600889 | orchestrator | Sunday 06 July 2025 20:19:51 +0000 (0:00:01.362) 0:00:49.244 *********** 2025-07-06 20:20:56.600953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:56 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:20:56.600975 | orchestrator | 2025-07-06 20:20:56 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:20:56.600980 | orchestrator | 2025-07-06 20:20:56 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:20:56.600984 | orchestrator | 2025-07-06 20:20:56 | INFO  | Task 1b780698-12b2-4016-9fc0-52469cb821b7 is in state SUCCESS 2025-07-06 20:20:56.600988 | orchestrator | 2025-07-06 20:20:56 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:56.600969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:56.601837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:56.601842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.601847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.601857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.601865 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.601872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.601877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.601880 | orchestrator | 2025-07-06 20:20:56.601885 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-07-06 20:20:56.601889 | orchestrator | Sunday 06 July 2025 20:19:55 +0000 (0:00:03.641) 0:00:52.886 *********** 2025-07-06 20:20:56.601893 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:56.601897 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:20:56.601900 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:20:56.601904 | orchestrator | 2025-07-06 20:20:56.601908 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-07-06 20:20:56.601912 | orchestrator | Sunday 06 July 2025 20:19:57 +0000 (0:00:01.889) 0:00:54.776 *********** 2025-07-06 20:20:56.601915 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:20:56.601919 | orchestrator | 2025-07-06 20:20:56.601923 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-07-06 20:20:56.601927 | orchestrator | Sunday 06 July 2025 20:19:59 +0000 (0:00:02.076) 0:00:56.852 *********** 2025-07-06 20:20:56.601931 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:56.601935 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:56.601938 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:56.601945 | orchestrator | 2025-07-06 20:20:56.601949 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-07-06 20:20:56.601953 | orchestrator | Sunday 06 July 2025 20:20:00 +0000 (0:00:01.436) 0:00:58.289 *********** 2025-07-06 20:20:56.601957 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:56.601964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:56.601970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:56.601975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.601979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.601986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.601990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.601997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.602003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.602007 | orchestrator | 2025-07-06 20:20:56.602011 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-07-06 20:20:56.602074 | orchestrator | Sunday 06 July 
2025 20:20:10 +0000 (0:00:09.629) 0:01:07.919 *********** 2025-07-06 20:20:56.602080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:20:56.602087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.602091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.602095 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:56.602103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:20:56.602125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.602129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.602133 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:56.602137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:20:56.602144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.602148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:20:56.602152 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:56.602156 | orchestrator | 2025-07-06 20:20:56.602160 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-07-06 20:20:56.602164 | orchestrator | Sunday 06 July 2025 20:20:10 +0000 (0:00:00.717) 0:01:08.637 *********** 2025-07-06 20:20:56.602173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:56.602178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:56.602185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:20:56.602189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.602193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.602200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.602209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.602213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.602220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:20:56.602224 | orchestrator | 2025-07-06 20:20:56.602228 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-07-06 20:20:56.602232 | orchestrator | Sunday 06 July 2025 20:20:14 +0000 (0:00:03.386) 0:01:12.023 *********** 2025-07-06 20:20:56.602236 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:56.602240 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:56.602244 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:56.602247 | orchestrator | 2025-07-06 20:20:56.602251 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-07-06 20:20:56.602255 | orchestrator | Sunday 06 July 2025 20:20:14 +0000 (0:00:00.420) 0:01:12.443 *********** 2025-07-06 20:20:56.602259 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:56.602262 | orchestrator | 2025-07-06 20:20:56.602266 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-07-06 20:20:56.602270 | orchestrator | Sunday 06 July 2025 20:20:16 +0000 (0:00:02.151) 0:01:14.595 *********** 2025-07-06 20:20:56.602273 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:56.602277 | orchestrator | 2025-07-06 20:20:56.602281 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-07-06 20:20:56.602285 | orchestrator | Sunday 06 July 2025 20:20:19 +0000 (0:00:02.529) 0:01:17.125 *********** 2025-07-06 20:20:56.602288 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:56.602292 | orchestrator | 2025-07-06 20:20:56.602296 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-07-06 20:20:56.602300 | orchestrator | Sunday 06 July 2025 20:20:31 +0000 (0:00:11.710) 0:01:28.835 *********** 2025-07-06 20:20:56.602303 | orchestrator | 2025-07-06 20:20:56.602307 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-07-06 20:20:56.602311 | orchestrator | Sunday 06 July 2025 20:20:31 +0000 (0:00:00.132) 0:01:28.968 *********** 2025-07-06 20:20:56.602315 | orchestrator | 2025-07-06 20:20:56.602318 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-07-06 20:20:56.602322 | orchestrator | Sunday 06 July 2025 20:20:31 +0000 (0:00:00.133) 0:01:29.101 *********** 2025-07-06 20:20:56.602326 | orchestrator | 2025-07-06 20:20:56.602330 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-07-06 20:20:56.602333 | orchestrator | Sunday 06 July 2025 20:20:31 +0000 (0:00:00.070) 0:01:29.172 *********** 2025-07-06 20:20:56.602337 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:20:56.602341 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:20:56.602344 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:56.602348 | orchestrator | 2025-07-06 20:20:56.602352 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-07-06 20:20:56.602356 | orchestrator | Sunday 06 July 2025 20:20:39 +0000 (0:00:08.177) 0:01:37.349 *********** 2025-07-06 20:20:56.602359 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:20:56.602366 | 
orchestrator | changed: [testbed-node-2] 2025-07-06 20:20:56.602372 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:56.602376 | orchestrator | 2025-07-06 20:20:56.602380 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-07-06 20:20:56.602384 | orchestrator | Sunday 06 July 2025 20:20:48 +0000 (0:00:08.362) 0:01:45.711 *********** 2025-07-06 20:20:56.602388 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:56.602391 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:20:56.602395 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:20:56.602399 | orchestrator | 2025-07-06 20:20:56.602402 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:20:56.602410 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-06 20:20:56.602414 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 20:20:56.602418 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 20:20:56.602422 | orchestrator | 2025-07-06 20:20:56.602426 | orchestrator | 2025-07-06 20:20:56.602430 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:20:56.602435 | orchestrator | Sunday 06 July 2025 20:20:55 +0000 (0:00:07.550) 0:01:53.262 *********** 2025-07-06 20:20:56.602442 | orchestrator | =============================================================================== 2025-07-06 20:20:56.602448 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.24s 2025-07-06 20:20:56.602452 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.71s 2025-07-06 20:20:56.602456 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.63s 2025-07-06 20:20:56.602460 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 8.36s 2025-07-06 20:20:56.602465 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.18s 2025-07-06 20:20:56.602469 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.55s 2025-07-06 20:20:56.602474 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.29s 2025-07-06 20:20:56.602478 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.37s 2025-07-06 20:20:56.602482 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.86s 2025-07-06 20:20:56.602486 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.64s 2025-07-06 20:20:56.602491 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.39s 2025-07-06 20:20:56.602495 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.33s 2025-07-06 20:20:56.602499 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.26s 2025-07-06 20:20:56.602503 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.22s 2025-07-06 20:20:56.602507 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.53s 2025-07-06 20:20:56.602512 | orchestrator | barbican : 
Creating barbican database ----------------------------------- 2.15s 2025-07-06 20:20:56.602516 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.08s 2025-07-06 20:20:56.602520 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.89s 2025-07-06 20:20:56.602525 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.84s 2025-07-06 20:20:56.602529 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.79s 2025-07-06 20:20:59.617487 | orchestrator | 2025-07-06 20:20:59 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:20:59.617619 | orchestrator | 2025-07-06 20:20:59 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:20:59.621436 | orchestrator | 2025-07-06 20:20:59 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:20:59.621484 | orchestrator | 2025-07-06 20:20:59 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:20:59.621491 | orchestrator | 2025-07-06 20:20:59 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:21:02.639706 | orchestrator | 2025-07-06 20:21:02 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:21:02.639822 | orchestrator | 2025-07-06 20:21:02 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:21:02.640274 | orchestrator | 2025-07-06 20:21:02 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:21:02.640647 | orchestrator | 2025-07-06 20:21:02 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:21:02.640666 | orchestrator | 2025-07-06 20:21:02 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:21:05.662507 | orchestrator | 2025-07-06 20:21:05 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:21:05.662607 | orchestrator | 2025-07-06 20:21:05 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:21:05.662946 | orchestrator | 2025-07-06 20:21:05 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:21:05.663483 | orchestrator | 2025-07-06 20:21:05 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:21:05.663508 | orchestrator | 2025-07-06 20:21:05 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:21:08.683407 | orchestrator | 2025-07-06 20:21:08 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:21:08.683535 | orchestrator | 2025-07-06 20:21:08 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:21:08.683924 | orchestrator | 2025-07-06 20:21:08 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:21:08.684569 | orchestrator | 2025-07-06 20:21:08 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:21:08.684593 | orchestrator | 2025-07-06 20:21:08 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:21:11.705840 | orchestrator | 2025-07-06 20:21:11 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:21:11.706216 | orchestrator | 2025-07-06 20:21:11 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:21:11.706801 | orchestrator | 2025-07-06 20:21:11 | INFO  | Task 
3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:21:11.707345 | orchestrator | 2025-07-06 20:21:11 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:21:11.707372 | orchestrator | 2025-07-06 20:21:11 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:21:14.737426 | orchestrator | 2025-07-06 20:21:14 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:21:14.737647 | orchestrator | 2025-07-06 20:21:14 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:21:14.738826 | orchestrator | 2025-07-06 20:21:14 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:21:14.739568 | orchestrator | 2025-07-06 20:21:14 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:21:14.739597 | orchestrator | 2025-07-06 20:21:14 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:21:17.769200 | orchestrator | 2025-07-06 20:21:17 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:21:17.771640 | orchestrator | 2025-07-06 20:21:17 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:21:17.773769 | orchestrator | 2025-07-06 20:21:17 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:21:17.775679 | orchestrator | 2025-07-06 20:21:17 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:21:17.775728 | orchestrator | 2025-07-06 20:21:17 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:21:20.801365 | orchestrator | 2025-07-06 20:21:20 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:21:20.801478 | orchestrator | 2025-07-06 20:21:20 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:21:20.801990 | orchestrator | 2025-07-06 20:21:20 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:21:20.802632 | orchestrator | 2025-07-06 20:21:20 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:21:20.802658 | orchestrator | 2025-07-06 20:21:20 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:21:23.833011 | orchestrator | 2025-07-06 20:21:23 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:21:23.833177 | orchestrator | 2025-07-06 20:21:23 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:21:23.833981 | orchestrator | 2025-07-06 20:21:23 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:21:23.834880 | orchestrator | 2025-07-06 20:21:23 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:21:23.834928 | orchestrator | 2025-07-06 20:21:23 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:21:26.866762 | orchestrator | 2025-07-06 20:21:26 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED 2025-07-06 20:21:26.869092 | orchestrator | 2025-07-06 20:21:26 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED 2025-07-06 20:21:26.869835 | orchestrator | 2025-07-06 20:21:26 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:21:26.873024 | orchestrator | 2025-07-06 20:21:26 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:21:26.873099 | orchestrator | 2025-07-06 20:21:26 | INFO  | Wait 1 
second(s) until the next check
2025-07-06 20:21:29.907666 | orchestrator | 2025-07-06 20:21:29 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED
2025-07-06 20:21:29.907982 | orchestrator | 2025-07-06 20:21:29 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state STARTED
2025-07-06 20:21:29.908774 | orchestrator | 2025-07-06 20:21:29 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED
2025-07-06 20:21:29.909544 | orchestrator | 2025-07-06 20:21:29 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED
2025-07-06 20:21:29.909570 | orchestrator | 2025-07-06 20:21:29 | INFO  | Wait 1 second(s) until the next check
[... the same status check for these four tasks repeats roughly every 3 seconds from 20:21:32 through 20:22:55; all four tasks remain in state STARTED, and each round ends with "Wait 1 second(s) until the next check" ...]
2025-07-06 20:22:58.165965 | orchestrator | 2025-07-06 20:22:58 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED
2025-07-06 20:22:58.168325 | orchestrator | 2025-07-06 20:22:58 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED
2025-07-06 20:22:58.172330 | orchestrator | 2025-07-06 20:22:58 | INFO  | Task 47296d43-c54b-4266-8d7a-aece70a7ae6c is in state SUCCESS
2025-07-06 20:22:58.174441 | orchestrator |
2025-07-06 20:22:58.174567 | orchestrator |
2025-07-06 20:22:58.174583 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:22:58.174595 | orchestrator |
2025-07-06 20:22:58.174607 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:22:58.174641 | orchestrator | Sunday 06 July 2025 20:18:51 +0000 (0:00:00.257) 0:00:00.257 ***********
2025-07-06 20:22:58.174658 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:22:58.174678 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:22:58.174698 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:22:58.174716 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:22:58.174735 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:22:58.174880 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:22:58.174910 | orchestrator |
2025-07-06 20:22:58.174922 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:22:58.174933 | orchestrator | Sunday 06 July 2025 20:18:52 +0000 (0:00:00.721) 0:00:00.979 ***********
2025-07-06 20:22:58.174944 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-07-06 20:22:58.174956 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-07-06 20:22:58.174967 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-07-06 20:22:58.174979 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-07-06 20:22:58.174990 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-07-06 20:22:58.175001 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-07-06 20:22:58.175018 | orchestrator |
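The "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above are the client waiting on asynchronous backend tasks: each round prints the state of every outstanding task ID and then sleeps before polling again, until a task reaches a terminal state and its captured output (the Ansible play below) is printed. A minimal sketch of such a wait loop, assuming the tasks are Celery tasks and that the broker/backend URLs are placeholders for illustration (this is not the actual osism client code):

```python
import logging
import time

from celery import Celery
from celery.result import AsyncResult

logging.basicConfig(format="%(asctime)s | %(levelname)s  | %(message)s", level=logging.INFO)

# Assumption: illustrative broker/backend URLs, not the testbed's real configuration.
app = Celery(broker="redis://localhost:6379/0", backend="redis://localhost:6379/1")


def wait_for_tasks(task_ids, interval=1):
    """Poll the given task IDs until every task has reached a terminal state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = AsyncResult(task_id, app=app).state
            logging.info("Task %s is in state %s", task_id, state)
            if state in ("SUCCESS", "FAILURE", "REVOKED"):
                pending.discard(task_id)
        if pending:
            logging.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)
```

Under these assumptions, `wait_for_tasks(["b5f106fe-...", "47296d43-..."])` would produce output in the same shape as the log rounds above.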
20:22:58.175035 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-07-06 20:22:58.175048 | orchestrator | 2025-07-06 20:22:58.175061 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-07-06 20:22:58.175073 | orchestrator | Sunday 06 July 2025 20:18:52 +0000 (0:00:00.639) 0:00:01.618 *********** 2025-07-06 20:22:58.175087 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:22:58.175138 | orchestrator | 2025-07-06 20:22:58.175151 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-07-06 20:22:58.175164 | orchestrator | Sunday 06 July 2025 20:18:53 +0000 (0:00:01.104) 0:00:02.723 *********** 2025-07-06 20:22:58.175177 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:22:58.175189 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:22:58.175202 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:22:58.175215 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:22:58.175227 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:22:58.175240 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:22:58.175252 | orchestrator | 2025-07-06 20:22:58.175265 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-07-06 20:22:58.175278 | orchestrator | Sunday 06 July 2025 20:18:55 +0000 (0:00:01.242) 0:00:03.965 *********** 2025-07-06 20:22:58.175290 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:22:58.175303 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:22:58.175315 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:22:58.175327 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:22:58.175339 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:22:58.175351 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:22:58.175363 | orchestrator | 2025-07-06 20:22:58.175374 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-07-06 20:22:58.175385 | orchestrator | Sunday 06 July 2025 20:18:56 +0000 (0:00:01.157) 0:00:05.123 *********** 2025-07-06 20:22:58.175396 | orchestrator | ok: [testbed-node-0] => { 2025-07-06 20:22:58.175408 | orchestrator |  "changed": false, 2025-07-06 20:22:58.175420 | orchestrator |  "msg": "All assertions passed" 2025-07-06 20:22:58.175431 | orchestrator | } 2025-07-06 20:22:58.175442 | orchestrator | ok: [testbed-node-1] => { 2025-07-06 20:22:58.175453 | orchestrator |  "changed": false, 2025-07-06 20:22:58.175464 | orchestrator |  "msg": "All assertions passed" 2025-07-06 20:22:58.175474 | orchestrator | } 2025-07-06 20:22:58.175485 | orchestrator | ok: [testbed-node-2] => { 2025-07-06 20:22:58.175496 | orchestrator |  "changed": false, 2025-07-06 20:22:58.175507 | orchestrator |  "msg": "All assertions passed" 2025-07-06 20:22:58.175517 | orchestrator | } 2025-07-06 20:22:58.175528 | orchestrator | ok: [testbed-node-3] => { 2025-07-06 20:22:58.175550 | orchestrator |  "changed": false, 2025-07-06 20:22:58.175561 | orchestrator |  "msg": "All assertions passed" 2025-07-06 20:22:58.175572 | orchestrator | } 2025-07-06 20:22:58.175583 | orchestrator | ok: [testbed-node-4] => { 2025-07-06 20:22:58.175594 | orchestrator |  "changed": false, 2025-07-06 20:22:58.175604 | orchestrator |  "msg": "All assertions passed" 2025-07-06 20:22:58.175615 | orchestrator | } 2025-07-06 20:22:58.175626 | orchestrator | ok: 
[testbed-node-5] => { 2025-07-06 20:22:58.175636 | orchestrator |  "changed": false, 2025-07-06 20:22:58.175647 | orchestrator |  "msg": "All assertions passed" 2025-07-06 20:22:58.175658 | orchestrator | } 2025-07-06 20:22:58.175669 | orchestrator | 2025-07-06 20:22:58.175680 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-07-06 20:22:58.175691 | orchestrator | Sunday 06 July 2025 20:18:57 +0000 (0:00:01.001) 0:00:06.124 *********** 2025-07-06 20:22:58.175702 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.175712 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.175723 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.175734 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.175745 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.175755 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.175766 | orchestrator | 2025-07-06 20:22:58.175777 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-07-06 20:22:58.175788 | orchestrator | Sunday 06 July 2025 20:18:58 +0000 (0:00:00.903) 0:00:07.027 *********** 2025-07-06 20:22:58.175812 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-07-06 20:22:58.175824 | orchestrator | 2025-07-06 20:22:58.175835 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-07-06 20:22:58.175846 | orchestrator | Sunday 06 July 2025 20:19:01 +0000 (0:00:03.868) 0:00:10.896 *********** 2025-07-06 20:22:58.175857 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-07-06 20:22:58.175870 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-07-06 20:22:58.175881 | orchestrator | 2025-07-06 20:22:58.175907 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-07-06 20:22:58.175919 | orchestrator | Sunday 06 July 2025 20:19:08 +0000 (0:00:06.346) 0:00:17.242 *********** 2025-07-06 20:22:58.175930 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-06 20:22:58.175941 | orchestrator | 2025-07-06 20:22:58.175952 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-07-06 20:22:58.175963 | orchestrator | Sunday 06 July 2025 20:19:11 +0000 (0:00:03.199) 0:00:20.442 *********** 2025-07-06 20:22:58.175974 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-06 20:22:58.175985 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-07-06 20:22:58.175996 | orchestrator | 2025-07-06 20:22:58.176006 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-07-06 20:22:58.176017 | orchestrator | Sunday 06 July 2025 20:19:15 +0000 (0:00:03.853) 0:00:24.296 *********** 2025-07-06 20:22:58.176028 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-06 20:22:58.176039 | orchestrator | 2025-07-06 20:22:58.176050 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-07-06 20:22:58.176061 | orchestrator | Sunday 06 July 2025 20:19:18 +0000 (0:00:03.449) 0:00:27.745 *********** 2025-07-06 20:22:58.176072 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-07-06 20:22:58.176082 | orchestrator | changed: [testbed-node-0] 
=> (item=neutron -> service -> service) 2025-07-06 20:22:58.176093 | orchestrator | 2025-07-06 20:22:58.176130 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-07-06 20:22:58.176141 | orchestrator | Sunday 06 July 2025 20:19:26 +0000 (0:00:07.614) 0:00:35.360 *********** 2025-07-06 20:22:58.176152 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.176170 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.176181 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.176192 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.176202 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.176213 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.176224 | orchestrator | 2025-07-06 20:22:58.176235 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-07-06 20:22:58.176245 | orchestrator | Sunday 06 July 2025 20:19:27 +0000 (0:00:00.731) 0:00:36.091 *********** 2025-07-06 20:22:58.176256 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.176267 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.176278 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.176288 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.176299 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.176310 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.176320 | orchestrator | 2025-07-06 20:22:58.176331 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-07-06 20:22:58.176342 | orchestrator | Sunday 06 July 2025 20:19:29 +0000 (0:00:01.919) 0:00:38.010 *********** 2025-07-06 20:22:58.176353 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:22:58.176364 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:22:58.176375 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:22:58.176386 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:22:58.176396 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:22:58.176407 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:22:58.176418 | orchestrator | 2025-07-06 20:22:58.176429 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-07-06 20:22:58.176440 | orchestrator | Sunday 06 July 2025 20:19:30 +0000 (0:00:01.632) 0:00:39.643 *********** 2025-07-06 20:22:58.176451 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.176462 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.176472 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.176483 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.176494 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.176505 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.176515 | orchestrator | 2025-07-06 20:22:58.176526 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-07-06 20:22:58.176537 | orchestrator | Sunday 06 July 2025 20:19:32 +0000 (0:00:01.942) 0:00:41.586 *********** 2025-07-06 20:22:58.176566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.176597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.176618 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:22:58.176631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.176643 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:22:58.176655 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:22:58.176667 | orchestrator | 2025-07-06 20:22:58.176683 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-07-06 20:22:58.176694 | orchestrator | Sunday 06 July 2025 20:19:35 +0000 (0:00:02.787) 0:00:44.373 *********** 2025-07-06 20:22:58.176705 | orchestrator | [WARNING]: Skipped 2025-07-06 20:22:58.176717 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-07-06 20:22:58.176729 | orchestrator | due to this access issue: 2025-07-06 20:22:58.176746 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-07-06 20:22:58.176757 | orchestrator | a directory 2025-07-06 20:22:58.176768 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:22:58.176790 | orchestrator | 2025-07-06 20:22:58.176808 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-07-06 20:22:58.176820 | orchestrator | Sunday 06 July 2025 20:19:36 +0000 (0:00:00.862) 0:00:45.235 *********** 2025-07-06 20:22:58.176831 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:22:58.176843 | orchestrator | 2025-07-06 20:22:58.176855 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-07-06 20:22:58.176865 | orchestrator | Sunday 06 July 2025 20:19:37 +0000 (0:00:01.162) 0:00:46.397 *********** 2025-07-06 20:22:58.176877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.176889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.176901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.176918 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:22:58.176945 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:22:58.176957 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:22:58.176968 | orchestrator | 2025-07-06 20:22:58.176979 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-07-06 20:22:58.176991 | orchestrator | Sunday 06 July 2025 20:19:40 +0000 (0:00:03.212) 0:00:49.610 *********** 2025-07-06 20:22:58.177002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.177014 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.177025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.177043 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.177065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.177077 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.177089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.177124 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.177136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.177147 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.177159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.177170 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.177181 | orchestrator | 2025-07-06 20:22:58.177192 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-07-06 20:22:58.177203 | orchestrator | Sunday 06 July 2025 20:19:43 +0000 
(0:00:03.048) 0:00:52.659 *********** 2025-07-06 20:22:58.177219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.177244 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.177263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.177275 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.177286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.177298 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.177309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.177320 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.177332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.177349 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.177365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.177377 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.177388 | orchestrator | 2025-07-06 20:22:58.177399 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-07-06 20:22:58.177416 | orchestrator | Sunday 06 July 2025 20:19:47 +0000 (0:00:03.401) 0:00:56.060 *********** 2025-07-06 20:22:58.177427 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.177438 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.177449 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.177460 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.177471 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.177481 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.177492 | orchestrator | 2025-07-06 20:22:58.177503 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-07-06 20:22:58.177514 | orchestrator | Sunday 06 July 2025 20:19:49 +0000 (0:00:02.156) 0:00:58.216 *********** 2025-07-06 20:22:58.177525 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.177536 | orchestrator | 2025-07-06 20:22:58.177547 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-07-06 20:22:58.177558 | orchestrator | Sunday 06 July 2025 20:19:49 +0000 (0:00:00.096) 0:00:58.313 *********** 2025-07-06 20:22:58.177569 | orchestrator | skipping: 
[testbed-node-0] 2025-07-06 20:22:58.177579 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.177590 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.177601 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.177612 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.177622 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.177633 | orchestrator | 2025-07-06 20:22:58.177644 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-07-06 20:22:58.177655 | orchestrator | Sunday 06 July 2025 20:19:50 +0000 (0:00:00.664) 0:00:58.977 *********** 2025-07-06 20:22:58.177666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.177684 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.177695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.177706 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.177722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.177734 | orchestrator | skipping: [testbed-node-5] 
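The container definitions echoed in the items above carry Kolla-style healthchecks: `healthcheck_curl http://192.168.16.1x:9696` probes the neutron-server API on the node's internal address, and `healthcheck_port neutron-ovn-metadata-agent 6640` verifies the metadata agent's connectivity on the OVN database port. A rough approximation of what such probes amount to, as a sketch only (the real helpers inside the Kolla images handle process ownership, TLS, and retries):

```python
import socket
import urllib.request


def http_probe(url, timeout=30):
    """HTTP probe in the spirit of 'healthcheck_curl http://192.168.16.10:9696'."""
    # urlopen raises for 4xx/5xx responses; any successful return counts as healthy here.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return 200 <= resp.status < 400


def tcp_probe(host, port, timeout=30):
    """TCP connect probe, a simplification of 'healthcheck_port <process> 6640'."""
    with socket.create_connection((host, port), timeout=timeout):
        return True


if __name__ == "__main__":
    print(http_probe("http://192.168.16.10:9696"))  # neutron-server API endpoint
    print(tcp_probe("192.168.16.10", 6640))         # OVN database port reachability
```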
2025-07-06 20:22:58.178229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.178254 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.178266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.178278 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.178290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.178311 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.178323 | orchestrator | 2025-07-06 20:22:58.178334 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-07-06 20:22:58.178345 | orchestrator | Sunday 06 July 2025 20:19:52 +0000 (0:00:02.379) 0:01:01.356 *********** 2025-07-06 20:22:58.178357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.178384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.178398 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:22:58.178410 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:22:58.178429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.178441 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:22:58.178453 | orchestrator | 2025-07-06 20:22:58.178465 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-07-06 20:22:58.178476 | orchestrator | Sunday 06 July 2025 20:19:56 +0000 (0:00:03.804) 0:01:05.161 *********** 2025-07-06 20:22:58.178507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.178520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.178538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.178550 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:22:58.178561 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:22:58.178583 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:22:58.178595 | orchestrator | 2025-07-06 20:22:58.178606 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-07-06 20:22:58.178617 | orchestrator | Sunday 06 July 2025 20:20:02 +0000 (0:00:06.710) 0:01:11.871 *********** 2025-07-06 20:22:58.178628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.178647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.178658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.178670 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.178686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.178697 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.178717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.178729 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.178741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.178758 | orchestrator | 2025-07-06 20:22:58.178770 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-07-06 20:22:58.178781 | orchestrator | Sunday 06 July 2025 20:20:07 +0000 (0:00:04.248) 0:01:16.120 *********** 2025-07-06 20:22:58.178792 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.178803 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.178814 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.178826 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:22:58.178838 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:22:58.178850 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:22:58.178862 | orchestrator | 2025-07-06 20:22:58.178875 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-07-06 20:22:58.178887 | orchestrator | Sunday 06 July 2025 20:20:10 +0000 (0:00:03.554) 0:01:19.675 *********** 2025-07-06 20:22:58.178900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.178912 | orchestrator | 
skipping: [testbed-node-3] 2025-07-06 20:22:58.178925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.178937 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.178961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.178985 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.178998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.179012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.179025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.179038 | orchestrator | 2025-07-06 20:22:58.179050 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-07-06 20:22:58.179062 | orchestrator | Sunday 06 July 2025 20:20:14 +0000 (0:00:03.719) 0:01:23.394 *********** 2025-07-06 20:22:58.179075 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.179087 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.179119 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.179132 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.179143 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.179156 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.179169 | orchestrator | 2025-07-06 20:22:58.179181 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-07-06 20:22:58.179192 | orchestrator | Sunday 06 July 2025 20:20:16 +0000 (0:00:02.076) 0:01:25.470 *********** 2025-07-06 20:22:58.179203 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.179214 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.179229 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.179246 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.179257 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.179268 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.179279 | orchestrator | 2025-07-06 20:22:58.179290 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-07-06 20:22:58.179301 | orchestrator | Sunday 06 July 2025 20:20:18 +0000 (0:00:02.075) 0:01:27.546 *********** 2025-07-06 20:22:58.179312 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.179323 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.179334 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.179350 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.179361 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.179372 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.179383 | orchestrator | 2025-07-06 20:22:58.179395 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-07-06 20:22:58.179406 | orchestrator | Sunday 06 July 2025 20:20:20 +0000 (0:00:02.115) 0:01:29.661 *********** 2025-07-06 20:22:58.179417 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.179427 | 
orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.179438 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.179449 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.179460 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.179470 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.179481 | orchestrator | 2025-07-06 20:22:58.179492 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-07-06 20:22:58.179503 | orchestrator | Sunday 06 July 2025 20:20:22 +0000 (0:00:01.971) 0:01:31.632 *********** 2025-07-06 20:22:58.179514 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.179525 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.179536 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.179547 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.179557 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.179568 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.179579 | orchestrator | 2025-07-06 20:22:58.179590 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-07-06 20:22:58.179601 | orchestrator | Sunday 06 July 2025 20:20:24 +0000 (0:00:02.175) 0:01:33.807 *********** 2025-07-06 20:22:58.179612 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.179622 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.179633 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.179644 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.179655 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.179666 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.179676 | orchestrator | 2025-07-06 20:22:58.179687 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-07-06 20:22:58.179698 | orchestrator | Sunday 06 July 2025 20:20:27 +0000 (0:00:02.313) 0:01:36.121 *********** 2025-07-06 20:22:58.179709 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-06 20:22:58.179720 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.179731 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-06 20:22:58.179742 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.179753 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-06 20:22:58.179764 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.179775 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-06 20:22:58.179786 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.179797 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-06 20:22:58.179808 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.179819 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-06 20:22:58.179836 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.179847 | orchestrator | 2025-07-06 20:22:58.179858 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-07-06 20:22:58.179869 | orchestrator | Sunday 06 July 2025 20:20:29 +0000 (0:00:02.128) 0:01:38.249 *********** 2025-07-06 
20:22:58.179881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.179892 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.179914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.179926 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.179938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.179949 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.179961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.179978 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.179989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.180001 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.180012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.180024 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.180034 | orchestrator | 2025-07-06 20:22:58.180045 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-07-06 20:22:58.180061 | orchestrator | Sunday 06 July 2025 20:20:31 +0000 (0:00:02.640) 0:01:40.890 *********** 2025-07-06 20:22:58.180079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.180090 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.180118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.180137 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.180148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.180159 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.180171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.180182 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.180198 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}})  2025-07-06 20:22:58.180210 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.180227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.180239 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.180250 | orchestrator | 2025-07-06 20:22:58.180261 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-07-06 20:22:58.180272 | orchestrator | Sunday 06 July 2025 20:20:34 +0000 (0:00:02.198) 0:01:43.088 *********** 2025-07-06 20:22:58.180283 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.180294 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.180305 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.180322 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.180333 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.180344 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.180354 | orchestrator | 2025-07-06 20:22:58.180365 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-07-06 20:22:58.180376 | orchestrator | Sunday 06 July 2025 20:20:36 +0000 (0:00:02.542) 0:01:45.631 *********** 2025-07-06 20:22:58.180387 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.180398 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.180409 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.180420 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:22:58.180431 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:22:58.180441 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:22:58.180452 | orchestrator | 2025-07-06 20:22:58.180463 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-07-06 20:22:58.180474 | orchestrator | Sunday 06 July 2025 20:20:39 +0000 (0:00:03.030) 0:01:48.661 *********** 2025-07-06 20:22:58.180485 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.180496 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.180507 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.180518 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.180528 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.180539 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.180550 | orchestrator | 2025-07-06 20:22:58.180562 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-07-06 20:22:58.180573 | orchestrator | Sunday 06 July 2025 20:20:42 +0000 (0:00:02.321) 0:01:50.982 *********** 2025-07-06 20:22:58.180583 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.180594 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.180605 | orchestrator | 
skipping: [testbed-node-1] 2025-07-06 20:22:58.180616 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.180627 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.180638 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.180648 | orchestrator | 2025-07-06 20:22:58.180659 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-07-06 20:22:58.180671 | orchestrator | Sunday 06 July 2025 20:20:43 +0000 (0:00:01.603) 0:01:52.585 *********** 2025-07-06 20:22:58.180681 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.180692 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.180703 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.180714 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.180724 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.180735 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.180746 | orchestrator | 2025-07-06 20:22:58.180757 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-07-06 20:22:58.180768 | orchestrator | Sunday 06 July 2025 20:20:46 +0000 (0:00:02.489) 0:01:55.075 *********** 2025-07-06 20:22:58.180779 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.180790 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.180800 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.180811 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.180822 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.180833 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.180844 | orchestrator | 2025-07-06 20:22:58.180855 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-07-06 20:22:58.180866 | orchestrator | Sunday 06 July 2025 20:20:48 +0000 (0:00:02.758) 0:01:57.834 *********** 2025-07-06 20:22:58.180877 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.180887 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.180898 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.180909 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.180920 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.180930 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.180947 | orchestrator | 2025-07-06 20:22:58.180958 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-07-06 20:22:58.180974 | orchestrator | Sunday 06 July 2025 20:20:51 +0000 (0:00:03.098) 0:02:00.932 *********** 2025-07-06 20:22:58.180985 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.180996 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.181007 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.181018 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.181029 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.181039 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.181050 | orchestrator | 2025-07-06 20:22:58.181061 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-07-06 20:22:58.181072 | orchestrator | Sunday 06 July 2025 20:20:53 +0000 (0:00:02.002) 0:02:02.935 *********** 2025-07-06 20:22:58.181083 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.181148 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.181162 | 
orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.181173 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.181184 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.181195 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.181206 | orchestrator | 2025-07-06 20:22:58.181217 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-07-06 20:22:58.181228 | orchestrator | Sunday 06 July 2025 20:20:56 +0000 (0:00:02.429) 0:02:05.365 *********** 2025-07-06 20:22:58.181239 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.181250 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.181261 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.181271 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.181282 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.181293 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.181304 | orchestrator | 2025-07-06 20:22:58.181315 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-07-06 20:22:58.181326 | orchestrator | Sunday 06 July 2025 20:20:59 +0000 (0:00:02.929) 0:02:08.294 *********** 2025-07-06 20:22:58.181337 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-06 20:22:58.181347 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.181359 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-06 20:22:58.181370 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.181381 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-06 20:22:58.181392 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.181403 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-06 20:22:58.181414 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.181425 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-06 20:22:58.181436 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.181447 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-06 20:22:58.181458 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.181469 | orchestrator | 2025-07-06 20:22:58.181480 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-07-06 20:22:58.181491 | orchestrator | Sunday 06 July 2025 20:21:03 +0000 (0:00:03.755) 0:02:12.049 *********** 2025-07-06 20:22:58.181503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.181521 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:58.181533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.181543 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:22:58.181565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.181576 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:58.181586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:22:58.181596 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:58.181606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.181621 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:22:58.181631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:22:58.181641 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:22:58.181651 | orchestrator | 2025-07-06 20:22:58.181661 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-07-06 20:22:58.181671 | orchestrator | Sunday 06 July 2025 20:21:06 +0000 (0:00:02.961) 0:02:15.010 *********** 2025-07-06 20:22:58.181689 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:22:58.181707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.181718 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:22:58.181728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.181744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:22:58.181759 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:22:58.181769 | orchestrator | 2025-07-06 20:22:58.181779 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-07-06 20:22:58.181794 | orchestrator 
| Sunday 06 July 2025 20:21:09 +0000 (0:00:03.266) 0:02:18.277 ***********
2025-07-06 20:22:58.181804 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:22:58.181814 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:22:58.181824 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:22:58.181834 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:22:58.181843 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:22:58.181853 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:22:58.181863 | orchestrator |
2025-07-06 20:22:58.181872 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-07-06 20:22:58.181882 | orchestrator | Sunday 06 July 2025 20:21:09 +0000 (0:00:00.435) 0:02:18.713 ***********
2025-07-06 20:22:58.181892 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:22:58.181901 | orchestrator |
2025-07-06 20:22:58.181911 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-07-06 20:22:58.181921 | orchestrator | Sunday 06 July 2025 20:21:11 +0000 (0:00:01.966) 0:02:20.679 ***********
2025-07-06 20:22:58.181930 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:22:58.181940 | orchestrator |
2025-07-06 20:22:58.181950 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-07-06 20:22:58.181959 | orchestrator | Sunday 06 July 2025 20:21:13 +0000 (0:00:01.744) 0:02:22.424 ***********
2025-07-06 20:22:58.181969 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:22:58.181979 | orchestrator |
2025-07-06 20:22:58.181988 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-06 20:22:58.182008 | orchestrator | Sunday 06 July 2025 20:21:53 +0000 (0:00:39.920) 0:03:02.344 ***********
2025-07-06 20:22:58.182043 | orchestrator |
2025-07-06 20:22:58.182055 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-06 20:22:58.182065 | orchestrator | Sunday 06 July 2025 20:21:53 +0000 (0:00:00.062) 0:03:02.406 ***********
2025-07-06 20:22:58.182075 | orchestrator |
2025-07-06 20:22:58.182085 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-06 20:22:58.182096 | orchestrator | Sunday 06 July 2025 20:21:53 +0000 (0:00:00.244) 0:03:02.651 ***********
2025-07-06 20:22:58.182148 | orchestrator |
2025-07-06 20:22:58.182158 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-06 20:22:58.182167 | orchestrator | Sunday 06 July 2025 20:21:53 +0000 (0:00:00.058) 0:03:02.710 ***********
2025-07-06 20:22:58.182175 | orchestrator |
2025-07-06 20:22:58.182183 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-06 20:22:58.182191 | orchestrator | Sunday 06 July 2025 20:21:53 +0000 (0:00:00.059) 0:03:02.770 ***********
2025-07-06 20:22:58.182199 | orchestrator |
2025-07-06 20:22:58.182207 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-06 20:22:58.182215 | orchestrator | Sunday 06 July 2025 20:21:53 +0000 (0:00:00.104) 0:03:02.874 ***********
2025-07-06 20:22:58.182223 | orchestrator |
2025-07-06 20:22:58.182231 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-07-06 20:22:58.182239 | orchestrator | Sunday 06 July 2025 20:21:54 +0000 (0:00:00.099) 0:03:02.974 ***********
2025-07-06 20:22:58.182247 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:22:58.182255 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:22:58.182263 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:22:58.182271 | orchestrator |
2025-07-06 20:22:58.182279 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-07-06 20:22:58.182287 | orchestrator | Sunday 06 July 2025 20:22:25 +0000 (0:00:31.553) 0:03:34.528 ***********
2025-07-06 20:22:58.182295 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:22:58.182303 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:22:58.182311 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:22:58.182319 | orchestrator |
2025-07-06 20:22:58.182327 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:22:58.182335 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-06 20:22:58.182343 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-07-06 20:22:58.182352 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-07-06 20:22:58.182360 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-07-06 20:22:58.182368 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-07-06 20:22:58.182376 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-07-06 20:22:58.182384 | orchestrator |
2025-07-06 20:22:58.182392 | orchestrator |
2025-07-06 20:22:58.182400 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:22:58.182408 | orchestrator | Sunday 06 July 2025 20:22:55 +0000 (0:00:30.362) 0:04:04.890 ***********
2025-07-06 20:22:58.182420 | orchestrator | ===============================================================================
2025-07-06 20:22:58.182428 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.92s
2025-07-06 20:22:58.182442 | orchestrator | neutron : Restart neutron-server container ----------------------------- 31.55s
2025-07-06 20:22:58.182450 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 30.36s
2025-07-06 20:22:58.182458 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.61s
2025-07-06 20:22:58.182470 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.71s
2025-07-06 20:22:58.182479 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.35s
2025-07-06 20:22:58.182487 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 4.25s
2025-07-06 20:22:58.182495 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.87s
2025-07-06 20:22:58.182503 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.85s
2025-07-06 20:22:58.182511 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.80s
2025-07-06 20:22:58.182519 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.76s
2025-07-06 20:22:58.182526 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.72s
2025-07-06 20:22:58.182534 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.55s
2025-07-06 20:22:58.182542 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.45s
2025-07-06 20:22:58.182550 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.40s
2025-07-06 20:22:58.182558 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.27s
2025-07-06 20:22:58.182566 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.21s
2025-07-06 20:22:58.182574 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.20s
2025-07-06 20:22:58.182582 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 3.10s
2025-07-06 20:22:58.182590 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.05s
2025-07-06 20:22:58.182598 | orchestrator | 2025-07-06 20:22:58 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED
2025-07-06 20:22:58.182606 | orchestrator | 2025-07-06 20:22:58 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED
2025-07-06 20:22:58.182615 | orchestrator | 2025-07-06 20:22:58 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:23:01.225413 | orchestrator | 2025-07-06 20:23:01 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED
2025-07-06 20:23:01.226931 | orchestrator | 2025-07-06 20:23:01 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED
2025-07-06 20:23:01.228449 | orchestrator | 2025-07-06 20:23:01 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED
2025-07-06 20:23:01.230095 | orchestrator | 2025-07-06 20:23:01 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED
2025-07-06 20:23:01.230153 | orchestrator | 2025-07-06 20:23:01 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:23:04.270282 | orchestrator | 2025-07-06 20:23:04 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED
2025-07-06 20:23:04.270538 | orchestrator | 2025-07-06 20:23:04 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state STARTED
2025-07-06 20:23:04.271037 | orchestrator | 2025-07-06 20:23:04 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED
2025-07-06 20:23:04.272219 | orchestrator | 2025-07-06 20:23:04 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED
2025-07-06 20:23:04.272283 | orchestrator | 2025-07-06 20:23:04 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:23:07.312542 | orchestrator | 2025-07-06 20:23:07 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED
2025-07-06 20:23:07.314295 | orchestrator | 2025-07-06 20:23:07 | INFO  | Task b5f106fe-1acd-4005-9ee4-842b8aae5f25 is in state SUCCESS
2025-07-06 20:23:07.315925 | orchestrator |
2025-07-06 20:23:07.315963 | orchestrator |
2025-07-06 20:23:07.315976 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:23:07.315988 | orchestrator |
2025-07-06 20:23:07.315999 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:23:07.316011 | orchestrator | Sunday 06 July 2025 20:20:18 +0000 (0:00:00.424) 0:00:00.424
*********** 2025-07-06 20:23:07.316022 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:23:07.316034 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:23:07.316045 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:23:07.316056 | orchestrator | 2025-07-06 20:23:07.316067 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:23:07.316078 | orchestrator | Sunday 06 July 2025 20:20:18 +0000 (0:00:00.266) 0:00:00.690 *********** 2025-07-06 20:23:07.316090 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-07-06 20:23:07.316130 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-07-06 20:23:07.316157 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-07-06 20:23:07.316169 | orchestrator | 2025-07-06 20:23:07.316180 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-07-06 20:23:07.316190 | orchestrator | 2025-07-06 20:23:07.316202 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-06 20:23:07.316213 | orchestrator | Sunday 06 July 2025 20:20:18 +0000 (0:00:00.339) 0:00:01.029 *********** 2025-07-06 20:23:07.316224 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:23:07.316239 | orchestrator | 2025-07-06 20:23:07.316257 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-07-06 20:23:07.316276 | orchestrator | Sunday 06 July 2025 20:20:19 +0000 (0:00:00.851) 0:00:01.881 *********** 2025-07-06 20:23:07.316304 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-07-06 20:23:07.316326 | orchestrator | 2025-07-06 20:23:07.316344 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-07-06 20:23:07.317130 | orchestrator | Sunday 06 July 2025 20:20:23 +0000 (0:00:03.548) 0:00:05.431 *********** 2025-07-06 20:23:07.317147 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-07-06 20:23:07.317159 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-07-06 20:23:07.317170 | orchestrator | 2025-07-06 20:23:07.317182 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-07-06 20:23:07.317193 | orchestrator | Sunday 06 July 2025 20:20:29 +0000 (0:00:06.422) 0:00:11.854 *********** 2025-07-06 20:23:07.317204 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-06 20:23:07.317215 | orchestrator | 2025-07-06 20:23:07.317226 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-07-06 20:23:07.317237 | orchestrator | Sunday 06 July 2025 20:20:32 +0000 (0:00:03.190) 0:00:15.044 *********** 2025-07-06 20:23:07.317248 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-06 20:23:07.317259 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-07-06 20:23:07.317270 | orchestrator | 2025-07-06 20:23:07.317281 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-07-06 20:23:07.317292 | orchestrator | Sunday 06 July 2025 20:20:36 +0000 (0:00:03.786) 0:00:18.830 *********** 2025-07-06 20:23:07.317303 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2025-07-06 20:23:07.317313 | orchestrator | 2025-07-06 20:23:07.317324 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-07-06 20:23:07.317335 | orchestrator | Sunday 06 July 2025 20:20:40 +0000 (0:00:03.616) 0:00:22.446 *********** 2025-07-06 20:23:07.317346 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-07-06 20:23:07.317372 | orchestrator | 2025-07-06 20:23:07.317383 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-07-06 20:23:07.317394 | orchestrator | Sunday 06 July 2025 20:20:44 +0000 (0:00:04.312) 0:00:26.759 *********** 2025-07-06 20:23:07.317708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:07.317773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:07.317797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:07.317811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.317824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.317845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.317857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.317900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.317920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.317933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.317945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.317970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.317982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.317994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318181 | orchestrator | 2025-07-06 20:23:07.318193 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-07-06 20:23:07.318206 | orchestrator | Sunday 06 July 2025 20:20:47 +0000 (0:00:03.469) 0:00:30.229 *********** 2025-07-06 20:23:07.318218 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:23:07.318230 | orchestrator | 2025-07-06 20:23:07.318242 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-07-06 20:23:07.318254 | orchestrator | Sunday 06 July 2025 20:20:48 +0000 (0:00:00.144) 0:00:30.373 *********** 2025-07-06 20:23:07.318266 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:23:07.318294 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:23:07.318307 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:23:07.318318 | orchestrator | 2025-07-06 
20:23:07.318330 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-06 20:23:07.318341 | orchestrator | Sunday 06 July 2025 20:20:48 +0000 (0:00:00.365) 0:00:30.739 *********** 2025-07-06 20:23:07.318353 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:23:07.318365 | orchestrator | 2025-07-06 20:23:07.318376 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-07-06 20:23:07.318388 | orchestrator | Sunday 06 July 2025 20:20:49 +0000 (0:00:01.344) 0:00:32.083 *********** 2025-07-06 20:23:07.318435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:07.318457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:07.318469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:07.318488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 
20:23:07.318695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.318755 | orchestrator | 2025-07-06 20:23:07.318766 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-07-06 20:23:07.318777 | orchestrator | Sunday 06 July 2025 20:20:57 +0000 (0:00:07.263) 0:00:39.347 *********** 2025-07-06 20:23:07.318789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:07.318829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:23:07.318847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.318866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.318878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:07.318890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.318902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.318914 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:23:07.318953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:23:07.318971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.318990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2025-07-06 20:23:07.319014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319026 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:23:07.319037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:07.319078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:23:07.319171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319222 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:23:07.319234 | orchestrator | 2025-07-06 20:23:07.319245 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-07-06 20:23:07.319256 | orchestrator | Sunday 06 July 2025 20:20:58 +0000 (0:00:01.569) 0:00:40.916 *********** 2025-07-06 20:23:07.319268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:07.319313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:23:07.319340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:07.319387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:23:07.319446 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:23:07.319463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:07.319510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:23:07.319572 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:23:07.319588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.319629 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:23:07.319639 | orchestrator | 2025-07-06 20:23:07.319649 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-07-06 20:23:07.319673 | orchestrator | Sunday 06 July 2025 20:21:01 +0000 (0:00:02.495) 0:00:43.412 *********** 2025-07-06 20:23:07.319683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:07.319742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:07.319755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:07.319765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.319776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.319786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.319831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.319847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.319858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.319868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.319879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.319889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.319899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.319941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 
20:23:07.319957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.319968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.319978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.319988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.319998 | orchestrator | 2025-07-06 20:23:07.320008 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-07-06 20:23:07.320018 | orchestrator | Sunday 06 July 2025 20:21:08 +0000 (0:00:07.044) 0:00:50.456 *********** 2025-07-06 20:23:07.320029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:07.320071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:07.320087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:07.320098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320294 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320319 | orchestrator | 2025-07-06 20:23:07.320329 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-07-06 20:23:07.320339 | orchestrator | Sunday 06 July 2025 20:21:22 +0000 (0:00:14.173) 0:01:04.630 *********** 2025-07-06 20:23:07.320349 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-06 20:23:07.320359 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-06 20:23:07.320369 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-06 20:23:07.320378 | orchestrator | 2025-07-06 20:23:07.320388 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-07-06 20:23:07.320398 | orchestrator | Sunday 06 July 2025 20:21:26 +0000 (0:00:04.021) 0:01:08.652 *********** 2025-07-06 20:23:07.320408 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-06 20:23:07.320417 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-06 20:23:07.320427 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-06 20:23:07.320437 | orchestrator | 2025-07-06 20:23:07.320446 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-07-06 20:23:07.320456 | orchestrator | Sunday 06 July 2025 20:21:28 +0000 (0:00:02.331) 0:01:10.983 *********** 2025-07-06 20:23:07.320471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:07.320487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:07.320497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:07.320513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320699 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320719 | orchestrator | 2025-07-06 20:23:07.320729 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-07-06 20:23:07.320739 | orchestrator | Sunday 06 July 2025 20:21:31 +0000 (0:00:03.061) 0:01:14.045 *********** 2025-07-06 20:23:07.320754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:07.320769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:07.320780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:07.320795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.320950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.320981 | orchestrator | 2025-07-06 20:23:07.320991 | orchestrator | TASK [designate : 
include_tasks] *********************************************** 2025-07-06 20:23:07.321001 | orchestrator | Sunday 06 July 2025 20:21:34 +0000 (0:00:02.941) 0:01:16.986 *********** 2025-07-06 20:23:07.321011 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:23:07.321020 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:23:07.321030 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:23:07.321040 | orchestrator | 2025-07-06 20:23:07.321050 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-07-06 20:23:07.321059 | orchestrator | Sunday 06 July 2025 20:21:35 +0000 (0:00:00.404) 0:01:17.391 *********** 2025-07-06 20:23:07.321075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:07.321090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:23:07.321123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.321135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}})  2025-07-06 20:23:07.321145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.321155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.321165 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:23:07.321181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:07.321196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:23:07.321212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.321222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.321232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.321242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.321253 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:23:07.321268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:07.321288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:23:07.321298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.321309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.321319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.321329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:23:07.321339 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:23:07.321349 | orchestrator | 2025-07-06 20:23:07.321359 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-07-06 20:23:07.321369 | orchestrator | Sunday 06 July 2025 20:21:35 +0000 (0:00:00.747) 0:01:18.138 *********** 2025-07-06 20:23:07.321385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:07.321408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:07.321419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:07.321429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.321439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.321450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:07.321470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.321485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.321495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.321506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.321516 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.321526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.321541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.321562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.321572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.321582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.321592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.321603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:07.321612 | orchestrator | 2025-07-06 20:23:07.321622 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-06 20:23:07.321632 | orchestrator | Sunday 06 July 2025 20:21:40 +0000 (0:00:04.197) 0:01:22.335 *********** 2025-07-06 20:23:07.321642 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:23:07.321659 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:23:07.321669 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:23:07.321678 | orchestrator | 2025-07-06 20:23:07.321688 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-07-06 20:23:07.321698 | orchestrator | Sunday 06 July 2025 20:21:40 +0000 (0:00:00.247) 0:01:22.583 *********** 2025-07-06 20:23:07.321708 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-07-06 20:23:07.321718 | orchestrator | 2025-07-06 20:23:07.321728 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-07-06 20:23:07.321737 | orchestrator | Sunday 06 July 2025 20:21:42 +0000 (0:00:02.129) 0:01:24.712 *********** 2025-07-06 20:23:07.321747 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-06 20:23:07.321757 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-07-06 20:23:07.321766 | orchestrator | 2025-07-06 20:23:07.321776 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-07-06 20:23:07.321790 | orchestrator | Sunday 06 July 2025 20:21:44 +0000 (0:00:02.178) 0:01:26.890 *********** 2025-07-06 20:23:07.321801 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:23:07.321810 | orchestrator | 2025-07-06 20:23:07.321820 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-07-06 20:23:07.321830 | orchestrator | Sunday 06 July 2025 20:21:59 +0000 (0:00:14.872) 0:01:41.763 *********** 2025-07-06 20:23:07.321839 | 
orchestrator | 2025-07-06 20:23:07.321849 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-07-06 20:23:07.321859 | orchestrator | Sunday 06 July 2025 20:21:59 +0000 (0:00:00.074) 0:01:41.837 *********** 2025-07-06 20:23:07.321868 | orchestrator | 2025-07-06 20:23:07.321878 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-07-06 20:23:07.321888 | orchestrator | Sunday 06 July 2025 20:21:59 +0000 (0:00:00.060) 0:01:41.898 *********** 2025-07-06 20:23:07.321898 | orchestrator | 2025-07-06 20:23:07.321907 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-07-06 20:23:07.321917 | orchestrator | Sunday 06 July 2025 20:21:59 +0000 (0:00:00.065) 0:01:41.963 *********** 2025-07-06 20:23:07.321927 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:23:07.321937 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:23:07.321947 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:23:07.321956 | orchestrator | 2025-07-06 20:23:07.321966 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-07-06 20:23:07.321976 | orchestrator | Sunday 06 July 2025 20:22:14 +0000 (0:00:14.531) 0:01:56.495 *********** 2025-07-06 20:23:07.321986 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:23:07.321995 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:23:07.322005 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:23:07.322040 | orchestrator | 2025-07-06 20:23:07.322053 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-07-06 20:23:07.322064 | orchestrator | Sunday 06 July 2025 20:22:22 +0000 (0:00:08.203) 0:02:04.698 *********** 2025-07-06 20:23:07.322074 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:23:07.322084 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:23:07.322094 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:23:07.322128 | orchestrator | 2025-07-06 20:23:07.322138 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-07-06 20:23:07.322148 | orchestrator | Sunday 06 July 2025 20:22:28 +0000 (0:00:06.471) 0:02:11.169 *********** 2025-07-06 20:23:07.322157 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:23:07.322167 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:23:07.322177 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:23:07.322186 | orchestrator | 2025-07-06 20:23:07.322196 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-07-06 20:23:07.322206 | orchestrator | Sunday 06 July 2025 20:22:41 +0000 (0:00:12.615) 0:02:23.785 *********** 2025-07-06 20:23:07.322215 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:23:07.322231 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:23:07.322241 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:23:07.322251 | orchestrator | 2025-07-06 20:23:07.322261 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-07-06 20:23:07.322270 | orchestrator | Sunday 06 July 2025 20:22:48 +0000 (0:00:07.030) 0:02:30.815 *********** 2025-07-06 20:23:07.322280 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:23:07.322290 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:23:07.322299 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:23:07.322309 | 
orchestrator | 2025-07-06 20:23:07.322319 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-07-06 20:23:07.322328 | orchestrator | Sunday 06 July 2025 20:22:59 +0000 (0:00:11.242) 0:02:42.058 *********** 2025-07-06 20:23:07.322338 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:23:07.322348 | orchestrator | 2025-07-06 20:23:07.322357 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:23:07.322367 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-06 20:23:07.322378 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 20:23:07.322388 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 20:23:07.322397 | orchestrator | 2025-07-06 20:23:07.322407 | orchestrator | 2025-07-06 20:23:07.322417 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:23:07.322427 | orchestrator | Sunday 06 July 2025 20:23:06 +0000 (0:00:06.823) 0:02:48.882 *********** 2025-07-06 20:23:07.322436 | orchestrator | =============================================================================== 2025-07-06 20:23:07.322446 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.87s 2025-07-06 20:23:07.322455 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.53s 2025-07-06 20:23:07.322465 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.17s 2025-07-06 20:23:07.322474 | orchestrator | designate : Restart designate-producer container ----------------------- 12.62s 2025-07-06 20:23:07.322484 | orchestrator | designate : Restart designate-worker container ------------------------- 11.24s 2025-07-06 20:23:07.322494 | orchestrator | designate : Restart designate-api container ----------------------------- 8.20s 2025-07-06 20:23:07.322503 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.26s 2025-07-06 20:23:07.322513 | orchestrator | designate : Copying over config.json files for services ----------------- 7.04s 2025-07-06 20:23:07.322523 | orchestrator | designate : Restart designate-mdns container ---------------------------- 7.03s 2025-07-06 20:23:07.322532 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.82s 2025-07-06 20:23:07.322547 | orchestrator | designate : Restart designate-central container ------------------------- 6.47s 2025-07-06 20:23:07.322557 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.42s 2025-07-06 20:23:07.322567 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.31s 2025-07-06 20:23:07.322577 | orchestrator | designate : Check designate containers ---------------------------------- 4.20s 2025-07-06 20:23:07.322623 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.02s 2025-07-06 20:23:07.322634 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.78s 2025-07-06 20:23:07.322644 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.62s 2025-07-06 20:23:07.322654 | orchestrator | service-ks-register : designate | Creating 
services --------------------- 3.55s 2025-07-06 20:23:07.322663 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.47s 2025-07-06 20:23:07.322677 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.19s 2025-07-06 20:23:07.322693 | orchestrator | 2025-07-06 20:23:07 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:07.322703 | orchestrator | 2025-07-06 20:23:07 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:23:07.322713 | orchestrator | 2025-07-06 20:23:07 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:10.340960 | orchestrator | 2025-07-06 20:23:10 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:23:10.342169 | orchestrator | 2025-07-06 20:23:10 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:23:10.342800 | orchestrator | 2025-07-06 20:23:10 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:10.344006 | orchestrator | 2025-07-06 20:23:10 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:23:10.344155 | orchestrator | 2025-07-06 20:23:10 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:13.392737 | orchestrator | 2025-07-06 20:23:13 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:23:13.393568 | orchestrator | 2025-07-06 20:23:13 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:23:13.394758 | orchestrator | 2025-07-06 20:23:13 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:13.396290 | orchestrator | 2025-07-06 20:23:13 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:23:13.396317 | orchestrator | 2025-07-06 20:23:13 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:16.439973 | orchestrator | 2025-07-06 20:23:16 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:23:16.441777 | orchestrator | 2025-07-06 20:23:16 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:23:16.442557 | orchestrator | 2025-07-06 20:23:16 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:16.448546 | orchestrator | 2025-07-06 20:23:16 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:23:16.448633 | orchestrator | 2025-07-06 20:23:16 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:19.499558 | orchestrator | 2025-07-06 20:23:19 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:23:19.501683 | orchestrator | 2025-07-06 20:23:19 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:23:19.503812 | orchestrator | 2025-07-06 20:23:19 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:19.505370 | orchestrator | 2025-07-06 20:23:19 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:23:19.505397 | orchestrator | 2025-07-06 20:23:19 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:22.551832 | orchestrator | 2025-07-06 20:23:22 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:23:22.553982 | orchestrator | 2025-07-06 20:23:22 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in 
state STARTED 2025-07-06 20:23:22.555800 | orchestrator | 2025-07-06 20:23:22 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:22.557413 | orchestrator | 2025-07-06 20:23:22 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:23:22.557439 | orchestrator | 2025-07-06 20:23:22 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:25.605327 | orchestrator | 2025-07-06 20:23:25 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:23:25.605415 | orchestrator | 2025-07-06 20:23:25 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:23:25.605425 | orchestrator | 2025-07-06 20:23:25 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:25.606332 | orchestrator | 2025-07-06 20:23:25 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:23:25.606373 | orchestrator | 2025-07-06 20:23:25 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:28.648306 | orchestrator | 2025-07-06 20:23:28 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:23:28.649805 | orchestrator | 2025-07-06 20:23:28 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:23:28.652632 | orchestrator | 2025-07-06 20:23:28 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:28.653898 | orchestrator | 2025-07-06 20:23:28 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:23:28.653941 | orchestrator | 2025-07-06 20:23:28 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:31.691389 | orchestrator | 2025-07-06 20:23:31 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:23:31.692906 | orchestrator | 2025-07-06 20:23:31 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:23:31.694534 | orchestrator | 2025-07-06 20:23:31 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:31.695989 | orchestrator | 2025-07-06 20:23:31 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:23:31.696281 | orchestrator | 2025-07-06 20:23:31 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:34.741783 | orchestrator | 2025-07-06 20:23:34 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:23:34.742779 | orchestrator | 2025-07-06 20:23:34 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:23:34.744496 | orchestrator | 2025-07-06 20:23:34 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:34.745987 | orchestrator | 2025-07-06 20:23:34 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:23:34.746011 | orchestrator | 2025-07-06 20:23:34 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:37.785756 | orchestrator | 2025-07-06 20:23:37 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:23:37.787394 | orchestrator | 2025-07-06 20:23:37 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:23:37.788953 | orchestrator | 2025-07-06 20:23:37 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:37.790620 | orchestrator | 2025-07-06 20:23:37 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in 
state STARTED 2025-07-06 20:23:37.790650 | orchestrator | 2025-07-06 20:23:37 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:40.837255 | orchestrator | 2025-07-06 20:23:40 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:23:40.838721 | orchestrator | 2025-07-06 20:23:40 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:23:40.840393 | orchestrator | 2025-07-06 20:23:40 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:40.843250 | orchestrator | 2025-07-06 20:23:40 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:23:40.843318 | orchestrator | 2025-07-06 20:23:40 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:43.891013 | orchestrator | 2025-07-06 20:23:43 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:23:43.892028 | orchestrator | 2025-07-06 20:23:43 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:23:43.894617 | orchestrator | 2025-07-06 20:23:43 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:43.896367 | orchestrator | 2025-07-06 20:23:43 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:23:43.897101 | orchestrator | 2025-07-06 20:23:43 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:46.951831 | orchestrator | 2025-07-06 20:23:46 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:23:46.954008 | orchestrator | 2025-07-06 20:23:46 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:23:46.956064 | orchestrator | 2025-07-06 20:23:46 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:46.958334 | orchestrator | 2025-07-06 20:23:46 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:23:46.958387 | orchestrator | 2025-07-06 20:23:46 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:49.994854 | orchestrator | 2025-07-06 20:23:49 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:23:49.994976 | orchestrator | 2025-07-06 20:23:49 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:23:49.995691 | orchestrator | 2025-07-06 20:23:49 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:49.997214 | orchestrator | 2025-07-06 20:23:49 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:23:49.997239 | orchestrator | 2025-07-06 20:23:49 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:53.032511 | orchestrator | 2025-07-06 20:23:53 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:23:53.032721 | orchestrator | 2025-07-06 20:23:53 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:23:53.034398 | orchestrator | 2025-07-06 20:23:53 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:53.034425 | orchestrator | 2025-07-06 20:23:53 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:23:53.034437 | orchestrator | 2025-07-06 20:23:53 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:56.079710 | orchestrator | 2025-07-06 20:23:56 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 
20:23:56.083713 | orchestrator | 2025-07-06 20:23:56 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:23:56.086280 | orchestrator | 2025-07-06 20:23:56 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:56.092855 | orchestrator | 2025-07-06 20:23:56 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:23:56.092936 | orchestrator | 2025-07-06 20:23:56 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:59.141877 | orchestrator | 2025-07-06 20:23:59 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:23:59.144040 | orchestrator | 2025-07-06 20:23:59 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:23:59.146329 | orchestrator | 2025-07-06 20:23:59 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:23:59.147929 | orchestrator | 2025-07-06 20:23:59 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:23:59.147978 | orchestrator | 2025-07-06 20:23:59 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:02.188396 | orchestrator | 2025-07-06 20:24:02 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:24:02.189804 | orchestrator | 2025-07-06 20:24:02 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:24:02.191516 | orchestrator | 2025-07-06 20:24:02 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:02.192767 | orchestrator | 2025-07-06 20:24:02 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:02.192794 | orchestrator | 2025-07-06 20:24:02 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:05.239297 | orchestrator | 2025-07-06 20:24:05 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state STARTED 2025-07-06 20:24:05.240344 | orchestrator | 2025-07-06 20:24:05 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:24:05.241564 | orchestrator | 2025-07-06 20:24:05 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:05.242631 | orchestrator | 2025-07-06 20:24:05 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:05.242679 | orchestrator | 2025-07-06 20:24:05 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:08.287666 | orchestrator | 2025-07-06 20:24:08 | INFO  | Task da5ab882-29af-461c-9105-2975e5204a28 is in state SUCCESS 2025-07-06 20:24:08.288801 | orchestrator | 2025-07-06 20:24:08.288845 | orchestrator | 2025-07-06 20:24:08.288858 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:24:08.288870 | orchestrator | 2025-07-06 20:24:08.288882 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:24:08.288894 | orchestrator | Sunday 06 July 2025 20:22:59 +0000 (0:00:00.227) 0:00:00.227 *********** 2025-07-06 20:24:08.288911 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:24:08.288929 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:24:08.288949 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:24:08.288968 | orchestrator | 2025-07-06 20:24:08.288983 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:24:08.288995 | orchestrator | Sunday 06 July 2025 20:22:59 +0000 
(0:00:00.254) 0:00:00.482 *********** 2025-07-06 20:24:08.289006 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-07-06 20:24:08.289017 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-07-06 20:24:08.289028 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-07-06 20:24:08.289039 | orchestrator | 2025-07-06 20:24:08.289050 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-07-06 20:24:08.289061 | orchestrator | 2025-07-06 20:24:08.289071 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-07-06 20:24:08.289082 | orchestrator | Sunday 06 July 2025 20:23:00 +0000 (0:00:00.355) 0:00:00.837 *********** 2025-07-06 20:24:08.289093 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:24:08.289315 | orchestrator | 2025-07-06 20:24:08.289334 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-07-06 20:24:08.289346 | orchestrator | Sunday 06 July 2025 20:23:00 +0000 (0:00:00.502) 0:00:01.340 *********** 2025-07-06 20:24:08.289357 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-07-06 20:24:08.289368 | orchestrator | 2025-07-06 20:24:08.289404 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-07-06 20:24:08.289417 | orchestrator | Sunday 06 July 2025 20:23:04 +0000 (0:00:03.444) 0:00:04.784 *********** 2025-07-06 20:24:08.289428 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-07-06 20:24:08.289439 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-07-06 20:24:08.289450 | orchestrator | 2025-07-06 20:24:08.289461 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-07-06 20:24:08.289472 | orchestrator | Sunday 06 July 2025 20:23:10 +0000 (0:00:06.400) 0:00:11.184 *********** 2025-07-06 20:24:08.289483 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-06 20:24:08.289494 | orchestrator | 2025-07-06 20:24:08.289504 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-07-06 20:24:08.289515 | orchestrator | Sunday 06 July 2025 20:23:13 +0000 (0:00:03.306) 0:00:14.491 *********** 2025-07-06 20:24:08.289526 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-06 20:24:08.289537 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-07-06 20:24:08.289548 | orchestrator | 2025-07-06 20:24:08.289558 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-07-06 20:24:08.289569 | orchestrator | Sunday 06 July 2025 20:23:17 +0000 (0:00:03.904) 0:00:18.396 *********** 2025-07-06 20:24:08.289580 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-06 20:24:08.289591 | orchestrator | 2025-07-06 20:24:08.289602 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-07-06 20:24:08.289612 | orchestrator | Sunday 06 July 2025 20:23:21 +0000 (0:00:03.390) 0:00:21.786 *********** 2025-07-06 20:24:08.289623 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-07-06 20:24:08.289634 | orchestrator | 
2025-07-06 20:24:08.289645 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-07-06 20:24:08.289655 | orchestrator | Sunday 06 July 2025 20:23:25 +0000 (0:00:03.992) 0:00:25.779 *********** 2025-07-06 20:24:08.289666 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:24:08.289677 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:24:08.289688 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:24:08.289699 | orchestrator | 2025-07-06 20:24:08.289709 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-07-06 20:24:08.289720 | orchestrator | Sunday 06 July 2025 20:23:25 +0000 (0:00:00.271) 0:00:26.051 *********** 2025-07-06 20:24:08.289735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:24:08.289767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:24:08.289789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:24:08.289801 | orchestrator | 2025-07-06 20:24:08.289812 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-07-06 20:24:08.289823 | orchestrator | Sunday 06 July 2025 20:23:26 +0000 (0:00:00.799) 0:00:26.851 *********** 2025-07-06 20:24:08.289834 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:24:08.289845 | orchestrator | 2025-07-06 20:24:08.289856 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-07-06 20:24:08.289867 | orchestrator | Sunday 06 July 2025 20:23:26 +0000 (0:00:00.120) 0:00:26.971 *********** 2025-07-06 20:24:08.289878 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:24:08.289888 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:24:08.289899 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:24:08.289910 | orchestrator | 2025-07-06 20:24:08.289921 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-07-06 20:24:08.289931 | orchestrator | Sunday 06 July 2025 20:23:26 +0000 (0:00:00.381) 0:00:27.353 *********** 2025-07-06 20:24:08.289944 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:24:08.289956 | orchestrator | 2025-07-06 20:24:08.289970 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-07-06 20:24:08.289983 | orchestrator | Sunday 06 July 2025 20:23:27 +0000 (0:00:00.447) 0:00:27.800 *********** 2025-07-06 20:24:08.289996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:24:08.290077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:24:08.290127 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:24:08.290182 | orchestrator | 2025-07-06 20:24:08.290202 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-07-06 20:24:08.290219 | orchestrator | Sunday 06 July 2025 20:23:28 +0000 (0:00:01.446) 0:00:29.246 *********** 2025-07-06 20:24:08.290239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:24:08.290258 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:24:08.290277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:24:08.290296 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:24:08.290322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:24:08.290343 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:24:08.290354 | orchestrator | 2025-07-06 20:24:08.290365 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-07-06 20:24:08.290376 | orchestrator | Sunday 06 July 2025 20:23:29 +0000 (0:00:00.618) 0:00:29.865 *********** 2025-07-06 20:24:08.290388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:24:08.290399 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:24:08.290411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:24:08.290422 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:24:08.290433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:24:08.290445 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:24:08.290456 | orchestrator | 2025-07-06 20:24:08.290474 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-07-06 20:24:08.290485 | orchestrator | Sunday 06 July 2025 20:23:29 +0000 (0:00:00.616) 0:00:30.482 *********** 2025-07-06 20:24:08.290504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:24:08.290516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:24:08.290528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:24:08.290539 | orchestrator | 2025-07-06 20:24:08.290551 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-07-06 20:24:08.290561 | orchestrator | Sunday 06 July 2025 20:23:31 +0000 (0:00:01.323) 0:00:31.805 *********** 2025-07-06 20:24:08.290572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:24:08.290590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:24:08.290610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:24:08.290622 | orchestrator | 2025-07-06 20:24:08.290633 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-07-06 20:24:08.290644 | orchestrator | Sunday 06 July 2025 20:23:33 +0000 (0:00:02.306) 0:00:34.112 *********** 2025-07-06 
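[Annotation] The healthcheck block repeated above ('healthcheck_curl http://192.168.16.x:8780', interval 30, retries 3, timeout 30) is run inside the container by kolla's healthcheck_curl helper. A rough Python stand-in for what such a probe amounts to, assuming only that any HTTP answer below 500 counts as healthy; the helper's exact semantics are kolla's and are not reproduced here:

# Approximate equivalent of the container healthcheck shown above.
# Exit code 0 means healthy, 1 means unhealthy (Docker healthcheck convention).
import sys
import urllib.error
import urllib.request

def placement_api_alive(url: str = "http://192.168.16.10:8780", timeout: float = 30.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as exc:
        # The version document may answer with a non-2xx code and still be healthy.
        return exc.code < 500
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    sys.exit(0 if placement_api_alive() else 1)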
20:24:08.290655 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-06 20:24:08.290666 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-06 20:24:08.290677 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-06 20:24:08.290688 | orchestrator | 2025-07-06 20:24:08.290698 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-07-06 20:24:08.290709 | orchestrator | Sunday 06 July 2025 20:23:34 +0000 (0:00:01.423) 0:00:35.536 *********** 2025-07-06 20:24:08.290720 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:24:08.290731 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:24:08.290742 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:24:08.290752 | orchestrator | 2025-07-06 20:24:08.290763 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-07-06 20:24:08.290774 | orchestrator | Sunday 06 July 2025 20:23:36 +0000 (0:00:01.358) 0:00:36.894 *********** 2025-07-06 20:24:08.290785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:24:08.290803 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:24:08.290814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:24:08.290825 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:24:08.290844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:24:08.290856 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:24:08.290868 | orchestrator | 2025-07-06 20:24:08.290878 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-07-06 20:24:08.290889 | orchestrator | Sunday 06 July 2025 20:23:36 +0000 (0:00:00.486) 0:00:37.381 *********** 2025-07-06 20:24:08.290900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:24:08.290912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:24:08.290930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-07-06 20:24:08.290941 | orchestrator | 
2025-07-06 20:24:08.290952 | orchestrator | TASK [placement : Creating placement databases] ********************************
2025-07-06 20:24:08.290963 | orchestrator | Sunday 06 July 2025 20:23:38 +0000 (0:00:01.370) 0:00:38.751 ***********
2025-07-06 20:24:08.290974 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:24:08.290985 | orchestrator | 
2025-07-06 20:24:08.290996 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-07-06 20:24:08.291006 | orchestrator | Sunday 06 July 2025 20:23:40 +0000 (0:00:02.370) 0:00:41.122 ***********
2025-07-06 20:24:08.291017 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:24:08.291028 | orchestrator | 
2025-07-06 20:24:08.291039 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-07-06 20:24:08.291050 | orchestrator | Sunday 06 July 2025 20:23:42 +0000 (0:00:02.359) 0:00:43.481 ***********
2025-07-06 20:24:08.291066 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:24:08.291077 | orchestrator | 
2025-07-06 20:24:08.291088 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-07-06 20:24:08.291099 | orchestrator | Sunday 06 July 2025 20:23:55 +0000 (0:00:12.803) 0:00:56.284 ***********
2025-07-06 20:24:08.291110 | orchestrator | 
2025-07-06 20:24:08.291121 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-07-06 20:24:08.291158 | orchestrator | Sunday 06 July 2025 20:23:55 +0000 (0:00:00.111) 0:00:56.396 ***********
2025-07-06 20:24:08.291171 | orchestrator | 
2025-07-06 20:24:08.291182 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-07-06 20:24:08.291193 | orchestrator | Sunday 06 July 2025 20:23:55 +0000 (0:00:00.089) 0:00:56.485 ***********
2025-07-06 20:24:08.291204 | orchestrator | 
2025-07-06 20:24:08.291215 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-07-06 20:24:08.291225 | orchestrator | Sunday 06 July 2025 20:23:56 +0000 (0:00:00.069) 0:00:56.554 ***********
2025-07-06 20:24:08.291236 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:24:08.291247 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:24:08.291258 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:24:08.291269 | orchestrator | 
2025-07-06 20:24:08.291280 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:24:08.291298 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-06 20:24:08.291317 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-06 20:24:08.291345 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-06 20:24:08.291376 | orchestrator | 
2025-07-06 20:24:08.291393 | orchestrator | 
2025-07-06 20:24:08.291410 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:24:08.291427 | orchestrator | Sunday 06 July 2025 20:24:07 +0000 (0:00:10.986) 0:01:07.541 ***********
2025-07-06 20:24:08.291443 | orchestrator | ===============================================================================
2025-07-06 20:24:08.291461 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.80s
2025-07-06 20:24:08.291481 | orchestrator | placement : Restart placement-api container ---------------------------- 10.99s
2025-07-06 20:24:08.291499 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.40s
2025-07-06 20:24:08.291518 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.99s
2025-07-06 20:24:08.291530 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.90s
2025-07-06 20:24:08.291540 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.44s
2025-07-06 20:24:08.291551 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.39s
2025-07-06 20:24:08.291562 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.31s
2025-07-06 20:24:08.291572 | orchestrator | placement : Creating placement databases -------------------------------- 2.37s
2025-07-06 20:24:08.291583 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.36s
2025-07-06 20:24:08.291593 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.31s
2025-07-06 20:24:08.291604 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.45s
2025-07-06 20:24:08.291615 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.42s
2025-07-06 20:24:08.291625 | orchestrator | placement : Check placement containers ---------------------------------- 1.37s
2025-07-06 20:24:08.291636 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.36s
2025-07-06 20:24:08.291646 | orchestrator | placement : Copying over config.json files for services ----------------- 1.32s
2025-07-06 20:24:08.291660 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.80s
2025-07-06 20:24:08.291677 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.62s
2025-07-06 20:24:08.291697 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.62s
2025-07-06 20:24:08.291715 | orchestrator | placement : include_tasks ----------------------------------------------- 0.50s
2025-07-06 20:24:08.291731 | orchestrator | 2025-07-06 20:24:08 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED
2025-07-06 20:24:08.291743 | orchestrator | 2025-07-06 20:24:08 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED
2025-07-06 20:24:08.291754 | orchestrator | 2025-07-06 20:24:08 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED
2025-07-06 20:24:08.291765 | orchestrator | 2025-07-06 20:24:08 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:24:11.325859 | orchestrator | 2025-07-06 20:24:11 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED
2025-07-06 20:24:11.326545 | orchestrator | 2025-07-06 20:24:11 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED
2025-07-06 20:24:11.327358 | orchestrator | 
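[Annotation] The INFO lines interleaved with the Ansible output come from the manager side of the deployment: each kolla play runs as a background task identified by UUID, and the driver polls the task states until they leave STARTED (the 8de50800... task reaches SUCCESS further down). A minimal sketch of that wait loop; get_task_state() is a hypothetical stand-in for whatever the OSISM tooling actually calls:

# Sketch of the polling pattern visible in the log
# ("Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check").
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval: float = 1.0):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)          # hypothetical client call
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

In the log the checks land roughly three seconds apart even though the message announces a one-second wait, which suggests the sleep is only one part of each polling round.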
2025-07-06 20:24:11 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:11.328026 | orchestrator | 2025-07-06 20:24:11 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:11.328046 | orchestrator | 2025-07-06 20:24:11 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:14.373249 | orchestrator | 2025-07-06 20:24:14 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:24:14.374293 | orchestrator | 2025-07-06 20:24:14 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:24:14.374683 | orchestrator | 2025-07-06 20:24:14 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:14.375726 | orchestrator | 2025-07-06 20:24:14 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:14.375759 | orchestrator | 2025-07-06 20:24:14 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:17.404603 | orchestrator | 2025-07-06 20:24:17 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:24:17.406262 | orchestrator | 2025-07-06 20:24:17 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:24:17.408025 | orchestrator | 2025-07-06 20:24:17 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:17.412304 | orchestrator | 2025-07-06 20:24:17 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:17.412370 | orchestrator | 2025-07-06 20:24:17 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:20.448471 | orchestrator | 2025-07-06 20:24:20 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:24:20.449108 | orchestrator | 2025-07-06 20:24:20 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:24:20.450577 | orchestrator | 2025-07-06 20:24:20 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:20.451503 | orchestrator | 2025-07-06 20:24:20 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:20.451528 | orchestrator | 2025-07-06 20:24:20 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:23.491305 | orchestrator | 2025-07-06 20:24:23 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:24:23.491731 | orchestrator | 2025-07-06 20:24:23 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:24:23.492841 | orchestrator | 2025-07-06 20:24:23 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:23.494126 | orchestrator | 2025-07-06 20:24:23 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:23.494188 | orchestrator | 2025-07-06 20:24:23 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:26.535186 | orchestrator | 2025-07-06 20:24:26 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:24:26.535239 | orchestrator | 2025-07-06 20:24:26 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:24:26.536932 | orchestrator | 2025-07-06 20:24:26 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:26.537949 | orchestrator | 2025-07-06 20:24:26 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:26.538008 | orchestrator | 
2025-07-06 20:24:26 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:29.576868 | orchestrator | 2025-07-06 20:24:29 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:24:29.584320 | orchestrator | 2025-07-06 20:24:29 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:24:29.584365 | orchestrator | 2025-07-06 20:24:29 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:29.584900 | orchestrator | 2025-07-06 20:24:29 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:29.584923 | orchestrator | 2025-07-06 20:24:29 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:32.634541 | orchestrator | 2025-07-06 20:24:32 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:24:32.636000 | orchestrator | 2025-07-06 20:24:32 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:24:32.637346 | orchestrator | 2025-07-06 20:24:32 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:32.639239 | orchestrator | 2025-07-06 20:24:32 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:32.639274 | orchestrator | 2025-07-06 20:24:32 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:35.673989 | orchestrator | 2025-07-06 20:24:35 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:24:35.674232 | orchestrator | 2025-07-06 20:24:35 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:24:35.674265 | orchestrator | 2025-07-06 20:24:35 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:35.675345 | orchestrator | 2025-07-06 20:24:35 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:35.675456 | orchestrator | 2025-07-06 20:24:35 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:38.708339 | orchestrator | 2025-07-06 20:24:38 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:24:38.708445 | orchestrator | 2025-07-06 20:24:38 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:24:38.708591 | orchestrator | 2025-07-06 20:24:38 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:38.709438 | orchestrator | 2025-07-06 20:24:38 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:38.709530 | orchestrator | 2025-07-06 20:24:38 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:41.734281 | orchestrator | 2025-07-06 20:24:41 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:24:41.736228 | orchestrator | 2025-07-06 20:24:41 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:24:41.739741 | orchestrator | 2025-07-06 20:24:41 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:41.741407 | orchestrator | 2025-07-06 20:24:41 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:41.741673 | orchestrator | 2025-07-06 20:24:41 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:44.773534 | orchestrator | 2025-07-06 20:24:44 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:24:44.773788 | orchestrator | 2025-07-06 20:24:44 | INFO 
 | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:24:44.774410 | orchestrator | 2025-07-06 20:24:44 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:44.774933 | orchestrator | 2025-07-06 20:24:44 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:44.774958 | orchestrator | 2025-07-06 20:24:44 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:47.820333 | orchestrator | 2025-07-06 20:24:47 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:24:47.820736 | orchestrator | 2025-07-06 20:24:47 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:24:47.821307 | orchestrator | 2025-07-06 20:24:47 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:47.821972 | orchestrator | 2025-07-06 20:24:47 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:47.822007 | orchestrator | 2025-07-06 20:24:47 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:50.866007 | orchestrator | 2025-07-06 20:24:50 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:24:50.867801 | orchestrator | 2025-07-06 20:24:50 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:24:50.869033 | orchestrator | 2025-07-06 20:24:50 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:50.870951 | orchestrator | 2025-07-06 20:24:50 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:50.871004 | orchestrator | 2025-07-06 20:24:50 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:53.916947 | orchestrator | 2025-07-06 20:24:53 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:24:53.918582 | orchestrator | 2025-07-06 20:24:53 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:24:53.919882 | orchestrator | 2025-07-06 20:24:53 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:53.921529 | orchestrator | 2025-07-06 20:24:53 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:53.921569 | orchestrator | 2025-07-06 20:24:53 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:56.977989 | orchestrator | 2025-07-06 20:24:56 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:24:56.980591 | orchestrator | 2025-07-06 20:24:56 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:24:56.982262 | orchestrator | 2025-07-06 20:24:56 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:24:56.984396 | orchestrator | 2025-07-06 20:24:56 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:24:56.984430 | orchestrator | 2025-07-06 20:24:56 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:00.026943 | orchestrator | 2025-07-06 20:25:00 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:25:00.027485 | orchestrator | 2025-07-06 20:25:00 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED 2025-07-06 20:25:00.028688 | orchestrator | 2025-07-06 20:25:00 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED 2025-07-06 20:25:00.029309 | orchestrator | 2025-07-06 20:25:00 | INFO  | 
Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED
2025-07-06 20:25:00.029345 | orchestrator | 2025-07-06 20:25:00 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:25:03.076840 | orchestrator | 2025-07-06 20:25:03 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED
2025-07-06 20:25:03.078240 | orchestrator | 2025-07-06 20:25:03 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state STARTED
2025-07-06 20:25:03.079161 | orchestrator | 2025-07-06 20:25:03 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED
2025-07-06 20:25:03.081043 | orchestrator | 2025-07-06 20:25:03 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED
2025-07-06 20:25:03.081111 | orchestrator | 2025-07-06 20:25:03 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:25:06.139572 | orchestrator | 2025-07-06 20:25:06 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED
2025-07-06 20:25:06.141769 | orchestrator | 2025-07-06 20:25:06 | INFO  | Task 989436ff-fda1-457d-a32a-683409215f5b is in state STARTED
2025-07-06 20:25:06.144572 | orchestrator | 2025-07-06 20:25:06 | INFO  | Task 8de50800-7aff-4103-98b5-a37e83453b19 is in state SUCCESS
2025-07-06 20:25:06.147040 | orchestrator | 
2025-07-06 20:25:06.147106 | orchestrator | 
2025-07-06 20:25:06.147122 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:25:06.147135 | orchestrator | 
2025-07-06 20:25:06.147146 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:25:06.147158 | orchestrator | Sunday 06 July 2025 20:23:10 +0000 (0:00:00.267) 0:00:00.267 ***********
2025-07-06 20:25:06.147201 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:25:06.147224 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:25:06.147244 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:25:06.147260 | orchestrator | 
2025-07-06 20:25:06.147271 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:25:06.147282 | orchestrator | Sunday 06 July 2025 20:23:11 +0000 (0:00:00.268) 0:00:00.535 ***********
2025-07-06 20:25:06.147293 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-07-06 20:25:06.147305 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-07-06 20:25:06.147316 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-07-06 20:25:06.147327 | orchestrator | 
2025-07-06 20:25:06.147338 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-07-06 20:25:06.147348 | orchestrator | 
2025-07-06 20:25:06.147360 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-07-06 20:25:06.147371 | orchestrator | Sunday 06 July 2025 20:23:11 +0000 (0:00:00.340) 0:00:00.876 ***********
2025-07-06 20:25:06.147382 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:25:06.147393 | orchestrator | 
2025-07-06 20:25:06.147754 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-07-06 20:25:06.147789 | orchestrator | Sunday 06 July 2025 20:23:11 +0000 (0:00:00.481) 0:00:01.357 ***********
2025-07-06 20:25:06.147809 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-07-06 20:25:06.147828 | orchestrator | 2025-07-06 
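[Annotation] The service-ks-register tasks that begin here, together with the endpoint, user and role tasks that follow, are standard Keystone bookkeeping for Magnum: a container-infra service, internal and public endpoints on port 9511, a magnum service user and an admin role grant. A rough equivalent using the plain openstack CLI from Python; kolla-ansible actually performs these steps with its own Ansible modules, and the region name, the sourced admin credentials and the MAGNUM_KEYSTONE_PASSWORD variable are assumptions here:

# Illustrative CLI equivalent of the "service-ks-register : magnum | ..." tasks.
import os
import subprocess

def openstack(*args: str) -> None:
    # Thin wrapper around the openstack CLI; admin credentials are expected in the environment.
    subprocess.run(["openstack", *args], check=True)

openstack("service", "create", "--name", "magnum", "container-infra")
openstack("endpoint", "create", "--region", "RegionOne",
          "magnum", "internal", "https://api-int.testbed.osism.xyz:9511/v1")
openstack("endpoint", "create", "--region", "RegionOne",
          "magnum", "public", "https://api.testbed.osism.xyz:9511/v1")
openstack("user", "create", "--project", "service",
          "--password", os.environ["MAGNUM_KEYSTONE_PASSWORD"],  # hypothetical variable
          "magnum")
openstack("role", "add", "--project", "service", "--user", "magnum", "admin")
# The Magnum trustee domain, trustee user and trustee role created further below
# follow the same pattern (domain create, user create, role add) and are omitted here.

The placement recap above shows the same registration sequence (Creating services, endpoints, projects, users, roles) for the placement service on port 8780.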
20:25:06.147849 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-07-06 20:25:06.147869 | orchestrator | Sunday 06 July 2025 20:23:15 +0000 (0:00:03.433) 0:00:04.791 *********** 2025-07-06 20:25:06.147884 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-07-06 20:25:06.147896 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-07-06 20:25:06.147907 | orchestrator | 2025-07-06 20:25:06.147918 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-07-06 20:25:06.147929 | orchestrator | Sunday 06 July 2025 20:23:21 +0000 (0:00:06.496) 0:00:11.287 *********** 2025-07-06 20:25:06.147940 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-06 20:25:06.147951 | orchestrator | 2025-07-06 20:25:06.147962 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-07-06 20:25:06.147973 | orchestrator | Sunday 06 July 2025 20:23:25 +0000 (0:00:03.338) 0:00:14.625 *********** 2025-07-06 20:25:06.147983 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-06 20:25:06.147994 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-07-06 20:25:06.148005 | orchestrator | 2025-07-06 20:25:06.148016 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-07-06 20:25:06.148027 | orchestrator | Sunday 06 July 2025 20:23:29 +0000 (0:00:03.882) 0:00:18.508 *********** 2025-07-06 20:25:06.148038 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-06 20:25:06.148049 | orchestrator | 2025-07-06 20:25:06.148060 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-07-06 20:25:06.148071 | orchestrator | Sunday 06 July 2025 20:23:32 +0000 (0:00:03.549) 0:00:22.058 *********** 2025-07-06 20:25:06.148101 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-07-06 20:25:06.148112 | orchestrator | 2025-07-06 20:25:06.148123 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-07-06 20:25:06.148134 | orchestrator | Sunday 06 July 2025 20:23:36 +0000 (0:00:04.134) 0:00:26.193 *********** 2025-07-06 20:25:06.148362 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:06.148380 | orchestrator | 2025-07-06 20:25:06.148391 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-07-06 20:25:06.148402 | orchestrator | Sunday 06 July 2025 20:23:40 +0000 (0:00:03.326) 0:00:29.519 *********** 2025-07-06 20:25:06.148413 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:06.148423 | orchestrator | 2025-07-06 20:25:06.148434 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-07-06 20:25:06.148445 | orchestrator | Sunday 06 July 2025 20:23:44 +0000 (0:00:04.455) 0:00:33.975 *********** 2025-07-06 20:25:06.148456 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:06.148467 | orchestrator | 2025-07-06 20:25:06.148477 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-07-06 20:25:06.148488 | orchestrator | Sunday 06 July 2025 20:23:48 +0000 (0:00:04.184) 0:00:38.159 *********** 2025-07-06 20:25:06.148518 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:06.148534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:06.148546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:06.148568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2025-07-06 20:25:06.148580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:06.148602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:06.148614 | orchestrator | 2025-07-06 20:25:06.148626 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-07-06 20:25:06.148637 | orchestrator | Sunday 06 July 2025 20:23:50 +0000 (0:00:01.384) 0:00:39.544 *********** 2025-07-06 20:25:06.148648 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:06.148659 | orchestrator | 2025-07-06 20:25:06.148671 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-07-06 20:25:06.148693 | orchestrator | Sunday 06 July 2025 20:23:50 +0000 (0:00:00.162) 0:00:39.706 *********** 2025-07-06 20:25:06.148705 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:06.148716 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:06.148726 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:06.148737 | orchestrator | 2025-07-06 20:25:06.148748 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-07-06 20:25:06.148759 | orchestrator | Sunday 06 July 2025 20:23:50 +0000 (0:00:00.549) 0:00:40.256 *********** 2025-07-06 20:25:06.148770 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:25:06.148781 | orchestrator | 2025-07-06 20:25:06.148792 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-07-06 20:25:06.148803 | orchestrator | Sunday 06 July 2025 20:23:51 +0000 (0:00:00.907) 0:00:41.164 *********** 2025-07-06 20:25:06.148814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:06.148833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:06.148845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:06.148864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:06.148876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:06.148895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:06.148907 | orchestrator | 2025-07-06 20:25:06.148918 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-07-06 20:25:06.148929 | orchestrator | Sunday 06 July 2025 20:23:54 +0000 (0:00:02.679) 0:00:43.844 *********** 2025-07-06 20:25:06.148940 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:25:06.148951 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:25:06.148962 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:25:06.148973 | orchestrator | 2025-07-06 20:25:06.148984 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-07-06 20:25:06.148995 | orchestrator | Sunday 06 July 2025 20:23:54 +0000 (0:00:00.290) 0:00:44.134 *********** 2025-07-06 20:25:06.149009 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:25:06.149021 | orchestrator | 2025-07-06 20:25:06.149033 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-07-06 20:25:06.149046 | orchestrator | Sunday 06 July 2025 20:23:55 +0000 (0:00:00.722) 0:00:44.856 *********** 2025-07-06 20:25:06.149059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:06.149080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:06.149094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:06.149114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:06.149128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:06.149142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:06.149154 | orchestrator | 2025-07-06 20:25:06.149166 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-07-06 20:25:06.149210 | orchestrator | Sunday 06 July 2025 20:23:57 +0000 (0:00:02.327) 0:00:47.183 *********** 2025-07-06 20:25:06.149231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:06.149252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:06.149264 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:06.149278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:06.149291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:06.149304 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:06.149316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:06.149334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:06.149352 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:06.149363 | orchestrator | 2025-07-06 20:25:06.149374 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-07-06 20:25:06.149385 | orchestrator | Sunday 06 July 2025 20:23:58 +0000 (0:00:00.649) 0:00:47.832 *********** 2025-07-06 20:25:06.149396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:06.149408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:06.149419 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:06.149430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:06.149448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:06.149465 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:06.149477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:06.149489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:06.149500 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:06.149511 | orchestrator | 2025-07-06 20:25:06.149522 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-07-06 20:25:06.149532 | orchestrator | Sunday 06 July 2025 20:23:59 +0000 (0:00:01.209) 0:00:49.042 *********** 2025-07-06 20:25:06.149543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:06.149555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 
20:25:06.149575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:06.149592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:06.149603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:06.149615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:06.149626 | orchestrator | 2025-07-06 20:25:06.149637 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-07-06 20:25:06.149648 | orchestrator | Sunday 06 July 2025 20:24:01 +0000 (0:00:02.375) 
0:00:51.418 *********** 2025-07-06 20:25:06.149659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:06.149683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:06.149695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:06.149706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:06.149718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:06.149729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:06.149751 | orchestrator | 2025-07-06 20:25:06.149762 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-07-06 20:25:06.149779 | orchestrator | Sunday 06 July 2025 20:24:07 +0000 (0:00:05.111) 0:00:56.529 *********** 2025-07-06 20:25:06.149799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:06.149819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:06.149838 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:06.149857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:06.149876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:06.149903 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:06.149931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:06.149951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:06.149968 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:06.149987 | orchestrator | 2025-07-06 20:25:06.150007 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-07-06 20:25:06.150099 | orchestrator | Sunday 06 July 2025 20:24:07 +0000 (0:00:00.824) 0:00:57.354 *********** 2025-07-06 20:25:06.150121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:06.150142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:06.150198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:06.150238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:06.150259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:06.150279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:06.150299 | orchestrator | 2025-07-06 20:25:06.150319 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-07-06 20:25:06.150338 | orchestrator | Sunday 06 July 2025 20:24:10 +0000 (0:00:02.314) 0:00:59.668 *********** 2025-07-06 20:25:06.150358 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:06.150377 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:06.150396 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:06.150415 | orchestrator | 2025-07-06 20:25:06.150430 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-07-06 20:25:06.150441 | orchestrator | Sunday 06 July 2025 20:24:10 +0000 (0:00:00.346) 0:01:00.014 *********** 2025-07-06 20:25:06.150452 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:06.150477 | orchestrator | 2025-07-06 20:25:06.150488 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-07-06 20:25:06.150499 | orchestrator | Sunday 06 July 2025 20:24:12 +0000 (0:00:02.100) 0:01:02.115 *********** 2025-07-06 20:25:06.150510 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:06.150521 | orchestrator | 2025-07-06 20:25:06.150532 | orchestrator | TASK [magnum : Running Magnum bootstrap 
container] *****************************
2025-07-06 20:25:06.150542 | orchestrator | Sunday 06 July 2025 20:24:14 +0000 (0:00:02.069) 0:01:04.184 ***********
2025-07-06 20:25:06.150553 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:06.150564 | orchestrator |
2025-07-06 20:25:06.150575 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-07-06 20:25:06.150586 | orchestrator | Sunday 06 July 2025 20:24:28 +0000 (0:00:14.238) 0:01:18.422 ***********
2025-07-06 20:25:06.150597 | orchestrator |
2025-07-06 20:25:06.150607 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-07-06 20:25:06.150618 | orchestrator | Sunday 06 July 2025 20:24:29 +0000 (0:00:00.067) 0:01:18.490 ***********
2025-07-06 20:25:06.150629 | orchestrator |
2025-07-06 20:25:06.150640 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-07-06 20:25:06.150650 | orchestrator | Sunday 06 July 2025 20:24:29 +0000 (0:00:00.083) 0:01:18.574 ***********
2025-07-06 20:25:06.150661 | orchestrator |
2025-07-06 20:25:06.150672 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-07-06 20:25:06.150683 | orchestrator | Sunday 06 July 2025 20:24:29 +0000 (0:00:00.065) 0:01:18.639 ***********
2025-07-06 20:25:06.150694 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:06.150705 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:25:06.150715 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:25:06.150726 | orchestrator |
2025-07-06 20:25:06.150737 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-07-06 20:25:06.150748 | orchestrator | Sunday 06 July 2025 20:24:44 +0000 (0:00:15.587) 0:01:34.226 ***********
2025-07-06 20:25:06.150759 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:06.150770 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:25:06.150781 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:25:06.150792 | orchestrator |
2025-07-06 20:25:06.150810 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:25:06.150822 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-06 20:25:06.150834 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-06 20:25:06.150847 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-06 20:25:06.150864 | orchestrator |
2025-07-06 20:25:06.150875 | orchestrator |
2025-07-06 20:25:06.150886 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:25:06.150897 | orchestrator | Sunday 06 July 2025 20:25:02 +0000 (0:00:17.801) 0:01:52.028 ***********
2025-07-06 20:25:06.150908 | orchestrator | ===============================================================================
2025-07-06 20:25:06.150919 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 17.80s
2025-07-06 20:25:06.150930 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.59s
2025-07-06 20:25:06.150940 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.24s
2025-07-06 20:25:06.150951 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.50s
2025-07-06 20:25:06.150962 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.11s
2025-07-06 20:25:06.150973 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.46s
2025-07-06 20:25:06.150985 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.18s
2025-07-06 20:25:06.151013 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.13s
2025-07-06 20:25:06.151033 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.88s
2025-07-06 20:25:06.151051 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.55s
2025-07-06 20:25:06.151070 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.43s
2025-07-06 20:25:06.151087 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.34s
2025-07-06 20:25:06.151104 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.33s
2025-07-06 20:25:06.151122 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.68s
2025-07-06 20:25:06.151139 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.38s
2025-07-06 20:25:06.151155 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.33s
2025-07-06 20:25:06.151199 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.31s
2025-07-06 20:25:06.151219 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.10s
2025-07-06 20:25:06.151236 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.07s
2025-07-06 20:25:06.151255 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.38s
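
The recap above closes out the Magnum rollout on the three controllers. A quick manual spot-check of the result, mirroring the healthcheck definitions from the container dictionaries earlier in the play (healthcheck_curl against port 9511 for magnum-api, healthcheck_port against 5672 for magnum-conductor), could look like the sketch below; the address 192.168.16.10 is testbed-node-0's internal IP taken from the log, and the commands are illustrative rather than part of the job.

# Run on a controller such as testbed-node-0 (192.168.16.10); adjust per node.
# List the containers the play just (re)started:
docker ps --filter name=magnum
# magnum-api answers on the port probed by healthcheck_curl:
curl -s http://192.168.16.10:9511 | head -n 5
# magnum-conductor has no HTTP endpoint; its kolla healthcheck only verifies an
# established connection to RabbitMQ on port 5672.
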
2025-07-06 20:25:06.151275 | orchestrator | 2025-07-06 20:25:06 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED
2025-07-06 20:25:06.151295 | orchestrator | 2025-07-06 20:25:06 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED
2025-07-06 20:25:06.151314 | orchestrator | 2025-07-06 20:25:06 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:25:09.208163 | orchestrator | 2025-07-06 20:25:09 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED
2025-07-06 20:25:09.208871 | orchestrator | 2025-07-06 20:25:09 | INFO  | Task 989436ff-fda1-457d-a32a-683409215f5b is in state SUCCESS
2025-07-06 20:25:09.210935 | orchestrator | 2025-07-06 20:25:09 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED
2025-07-06 20:25:09.212121 | orchestrator | 2025-07-06 20:25:09 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED
2025-07-06 20:25:09.213410 | orchestrator | 2025-07-06 20:25:09 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:25:12.267484 | orchestrator | 2025-07-06 20:25:12 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED
2025-07-06 20:25:12.269802 | orchestrator | 2025-07-06 20:25:12 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED
2025-07-06 20:25:12.271651 | orchestrator | 2025-07-06 20:25:12 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED
2025-07-06 20:25:12.273950 | orchestrator | 2025-07-06 20:25:12 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED
2025-07-06 20:25:12.273985 | orchestrator | 2025-07-06 20:25:12 | INFO  | Wait 1 second(s) until the next check
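
The interleaved "Task ... is in state STARTED" lines do not come from Ansible but from the OSISM tooling, which appears to run each play as a background task and poll its state once per second. Conceptually the loop is no more than the sketch below; task_state is a hypothetical stand-in for whatever the tooling actually calls, and the IDs are the ones visible in the log.

# Poll a set of task IDs until none of them is still STARTED (sketch only).
tasks="ad1003eb-737a-4978-a3c1-dd91672bdd5f 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0"
while true; do
  pending=0
  for id in $tasks; do
    state=$(task_state "$id")           # hypothetical helper: STARTED/SUCCESS/...
    echo "Task $id is in state $state"
    [ "$state" = "STARTED" ] && pending=1
  done
  [ "$pending" -eq 0 ] && break
  echo "Wait 1 second(s) until the next check"
  sleep 1
done
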
2025-07-06 20:25:15.316563 | orchestrator | 2025-07-06 20:25:15 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED
2025-07-06 20:25:15.319053 | orchestrator | 2025-07-06 20:25:15 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED
2025-07-06 20:25:15.321547 | orchestrator | 2025-07-06 20:25:15 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED
2025-07-06 20:25:15.324768 | orchestrator | 2025-07-06 20:25:15 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED
2025-07-06 20:25:15.324801 | orchestrator | 2025-07-06 20:25:15 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:25:18.366589 | orchestrator | 2025-07-06 20:25:18 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED
2025-07-06 20:25:18.367195 | orchestrator | 2025-07-06 20:25:18 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED
2025-07-06 20:25:18.368913 | orchestrator | 2025-07-06 20:25:18 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state STARTED
2025-07-06 20:25:18.370801 | orchestrator | 2025-07-06 20:25:18 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED
2025-07-06 20:25:18.370830 | orchestrator | 2025-07-06 20:25:18 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:25:21.414322 | orchestrator | 2025-07-06 20:25:21 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED
2025-07-06 20:25:21.416384 | orchestrator | 2025-07-06 20:25:21 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED
2025-07-06 20:25:21.422440 | orchestrator | 2025-07-06 20:25:21 | INFO  | Task 3ed89d5f-8a62-4b88-aad1-6265d8acc9b0 is in state SUCCESS
2025-07-06 20:25:21.425041 | orchestrator |
2025-07-06 20:25:21.425074 | orchestrator |
2025-07-06 20:25:21.425087 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:25:21.425100 | orchestrator |
2025-07-06 20:25:21.425111 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:25:21.425123 | orchestrator | Sunday 06 July 2025 20:25:07 +0000 (0:00:00.170) 0:00:00.170 ***********
2025-07-06 20:25:21.425134 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:25:21.425147 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:25:21.425158 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:25:21.425169 | orchestrator |
2025-07-06 20:25:21.425256 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:25:21.425270 | orchestrator | Sunday 06 July 2025 20:25:07 +0000 (0:00:00.287) 0:00:00.458 ***********
2025-07-06 20:25:21.425282 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-07-06 20:25:21.425293 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-07-06 20:25:21.425304 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-07-06 20:25:21.425316 | orchestrator |
2025-07-06 20:25:21.425327 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-07-06 20:25:21.425338 | orchestrator |
2025-07-06 20:25:21.425761 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-07-06 20:25:21.425779 | orchestrator | Sunday 06 July 2025 20:25:08 +0000 (0:00:00.604) 0:00:01.062 ***********
2025-07-06 20:25:21.425790 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:25:21.425801 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:25:21.425812 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:25:21.425823 | orchestrator |
2025-07-06 20:25:21.425835 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:25:21.425847 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:25:21.425859 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:25:21.425871 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:25:21.425882 | orchestrator |
2025-07-06 20:25:21.425893 | orchestrator |
2025-07-06 20:25:21.425904 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:25:21.425915 | orchestrator | Sunday 06 July 2025 20:25:08 +0000 (0:00:00.775) 0:00:01.838 ***********
2025-07-06 20:25:21.425926 | orchestrator | ===============================================================================
2025-07-06 20:25:21.425937 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.78s
2025-07-06 20:25:21.425948 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s
2025-07-06 20:25:21.425984 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2025-07-06 20:25:21.425996 | orchestrator |
2025-07-06 20:25:21.426007 | orchestrator |
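
The short play above simply blocks until the Nova API answers on its public port before the bootstrap plays begin. A hand-rolled equivalent is a small wait loop like the one below; 192.168.16.9 (the extra address that appears next to the node IPs in the no_proxy lists earlier in the log, presumably the internal VIP) and Nova's default API port 8774 are assumptions, since the log does not print the address and port the task actually probes.

# Wait until nova-api accepts TCP connections (sketch; adjust address/port).
until nc -z -w 2 192.168.16.9 8774; do
  echo "nova-api not reachable yet, retrying ..."
  sleep 2
done
echo "Nova public port is UP"
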
2025-07-06 20:25:21.427852 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:25:21.427887 | orchestrator |
2025-07-06 20:25:21.427898 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-07-06 20:25:21.427910 | orchestrator | Sunday 06 July 2025 20:16:30 +0000 (0:00:00.252) 0:00:00.252 ***********
2025-07-06 20:25:21.427921 | orchestrator | changed: [testbed-manager]
2025-07-06 20:25:21.427932 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:21.427943 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:25:21.427954 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:25:21.427965 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:25:21.427976 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:25:21.427987 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:25:21.427997 | orchestrator |
2025-07-06 20:25:21.428008 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:25:21.428019 | orchestrator | Sunday 06 July 2025 20:16:31 +0000 (0:00:00.777) 0:00:01.030 ***********
2025-07-06 20:25:21.428035 | orchestrator | changed: [testbed-manager]
2025-07-06 20:25:21.428053 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:21.428072 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:25:21.428096 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:25:21.428123 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:25:21.428140 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:25:21.428158 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:25:21.428176 | orchestrator |
2025-07-06 20:25:21.428235 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:25:21.428253 | orchestrator | Sunday 06 July 2025 20:16:32 +0000 (0:00:00.610) 0:00:01.640 ***********
2025-07-06 20:25:21.428269 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-07-06 20:25:21.428286 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-07-06 20:25:21.428302 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-07-06 20:25:21.428318 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-07-06 20:25:21.428334 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-07-06 20:25:21.428349 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-07-06 20:25:21.428365 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-07-06 20:25:21.428381 | orchestrator |
2025-07-06 20:25:21.428398 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-07-06 20:25:21.428415 | orchestrator |
2025-07-06 20:25:21.428431 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-07-06 20:25:21.428449 | orchestrator | Sunday 06 July 2025 20:16:32 +0000 (0:00:00.731) 0:00:02.372 ***********
2025-07-06 20:25:21.428467 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:25:21.428485 | orchestrator |
2025-07-06 20:25:21.428503 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-07-06 20:25:21.428521 | orchestrator | Sunday 06 July 2025 20:16:33 +0000 (0:00:00.592) 0:00:02.964 ***********
2025-07-06 20:25:21.428801 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-07-06 20:25:21.428958 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-07-06 20:25:21.428977 | orchestrator |
2025-07-06 20:25:21.428988 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-07-06 20:25:21.428999 | orchestrator | Sunday 06 July 2025 20:16:37 +0000 (0:00:04.098) 0:00:07.063 ***********
2025-07-06 20:25:21.429010 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-06 20:25:21.429022 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-06 20:25:21.429033 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:21.429043 | orchestrator |
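
The two tasks above create the nova_cell0 and nova_api databases and grant the service account access to them. Stripped of the kolla-ansible modules, that amounts to roughly the SQL below; the user name nova, the '%' host pattern, the password placeholder and the database VIP are assumptions here, and the real credentials come from kolla's generated passwords.

# Rough SQL equivalent of the two bootstrap tasks (illustrative only).
# 192.168.16.9: assumed internal VIP fronting MariaDB on the testbed.
mysql -h 192.168.16.9 -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS nova_api;
CREATE DATABASE IF NOT EXISTS nova_cell0;
CREATE USER IF NOT EXISTS 'nova'@'%' IDENTIFIED BY 'REPLACE_ME';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%';
SQL
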
2025-07-06 20:25:21.429054 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-07-06 20:25:21.429086 | orchestrator | Sunday 06 July 2025 20:16:41 +0000 (0:00:04.231) 0:00:11.294 ***********
2025-07-06 20:25:21.429097 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:21.429108 | orchestrator |
2025-07-06 20:25:21.429119 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-07-06 20:25:21.429130 | orchestrator | Sunday 06 July 2025 20:16:42 +0000 (0:00:00.707) 0:00:12.002 ***********
2025-07-06 20:25:21.429141 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:21.429152 | orchestrator |
2025-07-06 20:25:21.429163 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-07-06 20:25:21.429174 | orchestrator | Sunday 06 July 2025 20:16:44 +0000 (0:00:01.396) 0:00:13.399 ***********
2025-07-06 20:25:21.429212 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:21.429224 | orchestrator |
2025-07-06 20:25:21.429235 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-07-06 20:25:21.429246 | orchestrator | Sunday 06 July 2025 20:16:46 +0000 (0:00:02.849) 0:00:16.248 ***********
2025-07-06 20:25:21.429257 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:21.429268 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:21.429279 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:21.429290 | orchestrator |
2025-07-06 20:25:21.429301 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-07-06 20:25:21.429336 | orchestrator | Sunday 06 July 2025 20:16:47 +0000 (0:00:00.399) 0:00:16.647 ***********
2025-07-06 20:25:21.429348 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:25:21.429359 | orchestrator |
2025-07-06 20:25:21.429371 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-07-06 20:25:21.429382 | orchestrator | Sunday 06 July 2025 20:17:17 +0000 (0:00:30.158) 0:00:46.806 ***********
2025-07-06 20:25:21.429393 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:21.429404 | orchestrator |
2025-07-06 20:25:21.429415 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-07-06 20:25:21.429426 | orchestrator | Sunday 06 July 2025 20:17:31 +0000 (0:00:13.668) 0:01:00.475 ***********
2025-07-06 20:25:21.429437 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:25:21.429448 | orchestrator |
2025-07-06 20:25:21.429459 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-07-06 20:25:21.429470 | orchestrator | Sunday 06 July 2025 20:17:41 +0000 (0:00:10.884) 0:01:11.360 ***********
2025-07-06 20:25:21.429480 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:25:21.429491 | orchestrator |
2025-07-06 20:25:21.429502 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-07-06 20:25:21.429513 | orchestrator | Sunday 06 July 2025 20:17:43 +0000 (0:00:01.555) 0:01:12.915 ***********
2025-07-06 20:25:21.429524 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:21.429535 | orchestrator |
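
The cell0 and cell-listing tasks above essentially wrap nova-manage's cell_v2 commands, executed inside the nova_api container (nova_api is kolla's container name). Running them by hand, purely for illustration, would look roughly like this:

# Manual equivalents of the cell bootstrap steps (illustrative).
docker exec nova_api nova-manage cell_v2 map_cell0
docker exec nova_api nova-manage cell_v2 list_cells --verbose
# The "Create cell" task at the end of this section registers the actual cell;
# the cell name below is an assumption.
docker exec nova_api nova-manage cell_v2 create_cell --name cell1
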
2025-07-06 20:25:21.429546 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-07-06 20:25:21.429559 | orchestrator | Sunday 06 July 2025 20:17:43 +0000 (0:00:00.398) 0:01:13.314 ***********
2025-07-06 20:25:21.429572 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:25:21.429584 | orchestrator |
2025-07-06 20:25:21.429752 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-07-06 20:25:21.429766 | orchestrator | Sunday 06 July 2025 20:17:44 +0000 (0:00:00.408) 0:01:13.722 ***********
2025-07-06 20:25:21.429779 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:25:21.429791 | orchestrator |
2025-07-06 20:25:21.429803 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-07-06 20:25:21.429815 | orchestrator | Sunday 06 July 2025 20:18:01 +0000 (0:00:17.489) 0:01:31.212 ***********
2025-07-06 20:25:21.429828 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:21.429840 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:21.429852 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:21.429864 | orchestrator |
2025-07-06 20:25:21.429877 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-07-06 20:25:21.429899 | orchestrator |
2025-07-06 20:25:21.429911 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-07-06 20:25:21.429922 | orchestrator | Sunday 06 July 2025 20:18:02 +0000 (0:00:00.324) 0:01:31.536 ***********
2025-07-06 20:25:21.429933 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:25:21.429944 | orchestrator |
2025-07-06 20:25:21.429955 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-07-06 20:25:21.429966 | orchestrator | Sunday 06 July 2025 20:18:02 +0000 (0:00:00.572) 0:01:32.108 ***********
2025-07-06 20:25:21.430174 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:21.430246 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:21.430258 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:21.430269 | orchestrator |
2025-07-06 20:25:21.430280 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-07-06 20:25:21.430291 | orchestrator | Sunday 06 July 2025 20:18:04 +0000 (0:00:02.230) 0:01:34.339 ***********
2025-07-06 20:25:21.430302 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:21.430313 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:21.430324 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:21.430335 | orchestrator |
2025-07-06 20:25:21.430346 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-07-06 20:25:21.430357 | orchestrator | Sunday 06 July 2025 20:18:07 +0000 (0:00:02.374) 0:01:36.713 ***********
2025-07-06 20:25:21.430368 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:21.430378 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:21.430494 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:21.430511 | orchestrator |
2025-07-06 20:25:21.430522 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-07-06 20:25:21.430533 | orchestrator | Sunday 06 July 2025 20:18:07 +0000 (0:00:00.324) 0:01:37.037 ***********
2025-07-06 20:25:21.430544 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-07-06 20:25:21.430556 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:21.430566 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-07-06 20:25:21.430577 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:21.430588 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-07-06 20:25:21.430599 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-07-06 20:25:21.430610 | orchestrator |
2025-07-06 20:25:21.430621 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-07-06 20:25:21.430631 | orchestrator | Sunday 06 July 2025 20:18:16 +0000 (0:00:08.941) 0:01:45.979 ***********
2025-07-06 20:25:21.430642 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:21.430653 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:21.430664 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:21.430675 | orchestrator |
2025-07-06 20:25:21.430685 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-07-06 20:25:21.430696 | orchestrator | Sunday 06 July 2025 20:18:17 +0000 (0:00:00.521) 0:01:46.500 ***********
2025-07-06 20:25:21.430707 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-07-06 20:25:21.430718 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:21.430729 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-07-06 20:25:21.430740 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:21.430750 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-07-06 20:25:21.430761 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:21.430772 | orchestrator |
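
The service-rabbitmq tasks above make sure the vhost and messaging user that Nova's services expect exist on the RabbitMQ cluster; most of them are skipped on the secondary nodes, and the one that does run is delegated to a single host and reports ok. Expressed as plain rabbitmqctl calls inside kolla's rabbitmq container, what they manage looks roughly as follows; the vhost "/" and the user name openstack are kolla-ansible's usual defaults and are assumptions here.

# Roughly what the service-rabbitmq tasks manage (illustrative, not idempotent).
docker exec rabbitmq rabbitmqctl add_vhost /
docker exec rabbitmq rabbitmqctl add_user openstack REPLACE_ME
docker exec rabbitmq rabbitmqctl set_permissions -p / openstack ".*" ".*" ".*"
docker exec rabbitmq rabbitmqctl list_users
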
20:25:21.430707 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-07-06 20:25:21.430718 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.430729 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-06 20:25:21.430740 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.430750 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-06 20:25:21.430761 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.430772 | orchestrator | 2025-07-06 20:25:21.430782 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-07-06 20:25:21.430791 | orchestrator | Sunday 06 July 2025 20:18:17 +0000 (0:00:00.756) 0:01:47.257 *********** 2025-07-06 20:25:21.430801 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:21.430811 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.430820 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.430840 | orchestrator | 2025-07-06 20:25:21.430850 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-07-06 20:25:21.430860 | orchestrator | Sunday 06 July 2025 20:18:18 +0000 (0:00:00.673) 0:01:47.931 *********** 2025-07-06 20:25:21.430869 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.430879 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.430889 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:21.430898 | orchestrator | 2025-07-06 20:25:21.430908 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-07-06 20:25:21.430918 | orchestrator | Sunday 06 July 2025 20:18:19 +0000 (0:00:01.000) 0:01:48.931 *********** 2025-07-06 20:25:21.430928 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.430937 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.430947 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:21.430956 | orchestrator | 2025-07-06 20:25:21.430966 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-07-06 20:25:21.430976 | orchestrator | Sunday 06 July 2025 20:18:21 +0000 (0:00:02.171) 0:01:51.102 *********** 2025-07-06 20:25:21.430986 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.430995 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.431005 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:25:21.431014 | orchestrator | 2025-07-06 20:25:21.431024 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-07-06 20:25:21.431034 | orchestrator | Sunday 06 July 2025 20:18:42 +0000 (0:00:21.026) 0:02:12.129 *********** 2025-07-06 20:25:21.431043 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.431053 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.431063 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:25:21.431072 | orchestrator | 2025-07-06 20:25:21.431082 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-07-06 20:25:21.431092 | orchestrator | Sunday 06 July 2025 20:18:56 +0000 (0:00:13.506) 0:02:25.635 *********** 2025-07-06 20:25:21.431102 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:25:21.431113 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.431124 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.431135 | orchestrator | 2025-07-06 20:25:21.431146 | orchestrator | TASK [nova-cell : Create cell] 
************************************************* 2025-07-06 20:25:21.431157 | orchestrator | Sunday 06 July 2025 20:18:57 +0000 (0:00:01.260) 0:02:26.895 *********** 2025-07-06 20:25:21.431168 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.431199 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.431211 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:21.431222 | orchestrator | 2025-07-06 20:25:21.431233 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-07-06 20:25:21.431244 | orchestrator | Sunday 06 July 2025 20:19:09 +0000 (0:00:11.723) 0:02:38.619 *********** 2025-07-06 20:25:21.431255 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.431266 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.431276 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.431287 | orchestrator | 2025-07-06 20:25:21.431298 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-07-06 20:25:21.431309 | orchestrator | Sunday 06 July 2025 20:19:10 +0000 (0:00:01.390) 0:02:40.009 *********** 2025-07-06 20:25:21.431320 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.431331 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.431340 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.431350 | orchestrator | 2025-07-06 20:25:21.431360 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-07-06 20:25:21.431369 | orchestrator | 2025-07-06 20:25:21.431379 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-06 20:25:21.431389 | orchestrator | Sunday 06 July 2025 20:19:10 +0000 (0:00:00.312) 0:02:40.322 *********** 2025-07-06 20:25:21.431399 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:25:21.431415 | orchestrator | 2025-07-06 20:25:21.431498 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-07-06 20:25:21.431513 | orchestrator | Sunday 06 July 2025 20:19:11 +0000 (0:00:00.533) 0:02:40.856 *********** 2025-07-06 20:25:21.431522 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-07-06 20:25:21.431532 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-07-06 20:25:21.431542 | orchestrator | 2025-07-06 20:25:21.431552 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-07-06 20:25:21.431561 | orchestrator | Sunday 06 July 2025 20:19:14 +0000 (0:00:03.124) 0:02:43.980 *********** 2025-07-06 20:25:21.431571 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-07-06 20:25:21.431583 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-07-06 20:25:21.431592 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-07-06 20:25:21.431602 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-07-06 20:25:21.431612 | orchestrator | 2025-07-06 20:25:21.431622 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-07-06 20:25:21.431631 
| orchestrator | Sunday 06 July 2025 20:19:21 +0000 (0:00:06.667) 0:02:50.648 *********** 2025-07-06 20:25:21.431641 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-06 20:25:21.431650 | orchestrator | 2025-07-06 20:25:21.431660 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-07-06 20:25:21.431670 | orchestrator | Sunday 06 July 2025 20:19:24 +0000 (0:00:03.226) 0:02:53.875 *********** 2025-07-06 20:25:21.431679 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-06 20:25:21.431689 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-07-06 20:25:21.431699 | orchestrator | 2025-07-06 20:25:21.431709 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-07-06 20:25:21.431718 | orchestrator | Sunday 06 July 2025 20:19:28 +0000 (0:00:03.872) 0:02:57.747 *********** 2025-07-06 20:25:21.431728 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-06 20:25:21.431738 | orchestrator | 2025-07-06 20:25:21.431747 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-07-06 20:25:21.431757 | orchestrator | Sunday 06 July 2025 20:19:31 +0000 (0:00:03.370) 0:03:01.118 *********** 2025-07-06 20:25:21.431767 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-07-06 20:25:21.431776 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-07-06 20:25:21.431786 | orchestrator | 2025-07-06 20:25:21.431796 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-07-06 20:25:21.431805 | orchestrator | Sunday 06 July 2025 20:19:39 +0000 (0:00:07.360) 0:03:08.479 *********** 2025-07-06 20:25:21.431820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:21.431929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:21.431947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:21.431959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.431970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.431988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.431998 | orchestrator | 2025-07-06 20:25:21.432008 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-07-06 20:25:21.432018 | orchestrator | Sunday 06 July 2025 20:19:40 +0000 (0:00:01.576) 0:03:10.055 *********** 2025-07-06 20:25:21.432028 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.432037 | orchestrator | 2025-07-06 20:25:21.432047 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-07-06 20:25:21.432057 | orchestrator | Sunday 06 July 2025 20:19:40 +0000 (0:00:00.120) 0:03:10.176 *********** 2025-07-06 20:25:21.432067 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.432076 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.432086 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.432096 | orchestrator | 2025-07-06 20:25:21.432105 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-07-06 20:25:21.432142 | orchestrator | Sunday 06 July 2025 20:19:41 +0000 (0:00:00.823) 0:03:10.999 *********** 2025-07-06 20:25:21.432154 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:25:21.432239 | orchestrator | 2025-07-06 20:25:21.432252 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-07-06 20:25:21.432262 | orchestrator | Sunday 06 July 2025 20:19:43 +0000 (0:00:01.536) 0:03:12.536 *********** 2025-07-06 20:25:21.432272 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.432281 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.432291 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.432301 | orchestrator | 2025-07-06 20:25:21.432310 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-06 20:25:21.432320 | orchestrator | Sunday 06 July 2025 20:19:43 +0000 (0:00:00.439) 0:03:12.975 *********** 2025-07-06 20:25:21.432330 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:25:21.432340 | orchestrator | 2025-07-06 20:25:21.432349 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-07-06 20:25:21.432359 | orchestrator | Sunday 06 July 2025 20:19:44 +0000 (0:00:01.093) 0:03:14.069 *********** 2025-07-06 20:25:21.432370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': 
'30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:21.432395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:21.432442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:21.432456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.432466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.432476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.432490 | orchestrator | 2025-07-06 20:25:21.432498 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-07-06 20:25:21.432506 | orchestrator | Sunday 06 July 2025 20:19:47 +0000 (0:00:03.236) 0:03:17.306 *********** 2025-07-06 20:25:21.432515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:25:21.432546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.432561 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.432575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:25:21.432590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.432613 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.432628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  
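The container definitions dumped in these items also carry the health checks kolla wires into each container: nova_api runs healthcheck_curl against the API port on the node's internal address, and nova_scheduler runs healthcheck_port for the nova-scheduler process on port 5672 (its RabbitMQ connection). The backend TLS copy tasks are skipped here, consistent with backend TLS being disabled (tls_backend is 'no' throughout these definitions). A rough manual equivalent of those checks from a controller node could look like the sketch below; the IP and container name are taken from the items above, everything else is illustrative:

    # Does nova-api answer on the internal API address? (same spirit as healthcheck_curl)
    curl -fsS http://192.168.16.10:8774/ > /dev/null && echo "nova-api listening"

    # Does any process hold a connection on 5672 (RabbitMQ), as healthcheck_port checks for nova-scheduler?
    ss -tnp | grep ':5672'

    # What the container engine itself reports for the configured healthcheck (assuming Docker)
    docker inspect --format '{{.State.Health.Status}}' nova_api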
2025-07-06 20:25:21.432643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.432656 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.432670 | orchestrator | 2025-07-06 20:25:21.432685 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-07-06 20:25:21.432730 | orchestrator | Sunday 06 July 2025 20:19:48 +0000 (0:00:00.950) 0:03:18.256 *********** 2025-07-06 20:25:21.432745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:25:21.432760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.432782 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.432798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:25:21.432814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.432829 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.432884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:25:21.432901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.432923 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.432936 | orchestrator | 2025-07-06 20:25:21.432948 | orchestrator | TASK [nova : Copying over config.json files for services] 
********************** 2025-07-06 20:25:21.432960 | orchestrator | Sunday 06 July 2025 20:19:49 +0000 (0:00:00.954) 0:03:19.211 *********** 2025-07-06 20:25:21.432972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:21.433019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:21.433036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:21.433059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.433073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.433085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.433098 | orchestrator | 2025-07-06 20:25:21.433110 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-07-06 20:25:21.433124 | orchestrator | Sunday 06 July 2025 20:19:52 +0000 (0:00:02.494) 0:03:21.705 *********** 2025-07-06 20:25:21.433178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:21.433222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:21.433245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:21.433259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 
20:25:21.433309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.433325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.433348 | orchestrator | 2025-07-06 20:25:21.433363 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-07-06 20:25:21.433377 | orchestrator | Sunday 06 July 2025 20:20:01 +0000 (0:00:09.267) 0:03:30.973 *********** 2025-07-06 20:25:21.433391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:25:21.433405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.433420 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.433434 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:25:21.433492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.433519 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.433535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:25:21.433549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.433563 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.433578 | orchestrator | 2025-07-06 20:25:21.433593 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-07-06 20:25:21.433607 | orchestrator | Sunday 06 July 2025 20:20:02 +0000 (0:00:00.741) 0:03:31.714 *********** 2025-07-06 20:25:21.433620 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:25:21.433631 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:21.433645 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:25:21.433657 | orchestrator | 2025-07-06 20:25:21.433670 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-07-06 20:25:21.433681 | orchestrator | Sunday 06 July 2025 20:20:04 +0000 (0:00:02.045) 0:03:33.759 *********** 2025-07-06 20:25:21.433692 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.433705 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.433719 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.433732 | orchestrator | 2025-07-06 20:25:21.433745 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-07-06 20:25:21.433758 | orchestrator | Sunday 06 July 2025 20:20:05 +0000 (0:00:00.912) 0:03:34.672 *********** 2025-07-06 20:25:21.433820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:21.433844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:21.433860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:21.433875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.433890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.433986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.433999 | orchestrator | 2025-07-06 20:25:21.434007 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-06 20:25:21.434068 | orchestrator | Sunday 06 July 2025 20:20:07 +0000 (0:00:02.625) 0:03:37.297 *********** 2025-07-06 20:25:21.434086 | orchestrator | 2025-07-06 20:25:21.434099 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-06 20:25:21.434114 | orchestrator | Sunday 06 July 2025 20:20:08 +0000 (0:00:00.262) 0:03:37.560 *********** 2025-07-06 20:25:21.434129 | orchestrator | 2025-07-06 20:25:21.434143 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-06 20:25:21.434157 | orchestrator | Sunday 06 July 2025 20:20:08 +0000 (0:00:00.133) 0:03:37.694 *********** 2025-07-06 20:25:21.434170 | orchestrator | 2025-07-06 20:25:21.434206 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-07-06 20:25:21.434222 | orchestrator | Sunday 06 July 2025 20:20:08 +0000 (0:00:00.211) 0:03:37.905 *********** 2025-07-06 20:25:21.434236 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:21.434251 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:25:21.434264 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:25:21.434278 | orchestrator | 2025-07-06 20:25:21.434292 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-07-06 20:25:21.434306 | orchestrator | Sunday 06 July 2025 20:20:34 +0000 (0:00:26.456) 0:04:04.362 *********** 2025-07-06 20:25:21.434320 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:21.434334 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:25:21.434350 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:25:21.434363 | orchestrator | 2025-07-06 20:25:21.434377 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-07-06 20:25:21.434387 | orchestrator | 2025-07-06 20:25:21.434400 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-06 20:25:21.434414 | orchestrator | Sunday 06 July 2025 20:20:45 +0000 (0:00:10.955) 0:04:15.317 *********** 2025-07-06 20:25:21.434427 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:25:21.434440 | orchestrator | 2025-07-06 20:25:21.434453 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-06 20:25:21.434468 | orchestrator | Sunday 06 July 2025 20:20:47 +0000 (0:00:01.771) 0:04:17.089 *********** 2025-07-06 20:25:21.434476 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.434484 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.434492 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.434500 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.434508 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.434516 | orchestrator | skipping: 
[testbed-node-2] 2025-07-06 20:25:21.434524 | orchestrator | 2025-07-06 20:25:21.434532 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-07-06 20:25:21.434540 | orchestrator | Sunday 06 July 2025 20:20:48 +0000 (0:00:00.854) 0:04:17.944 *********** 2025-07-06 20:25:21.434548 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.434556 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.434564 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.434572 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:25:21.434590 | orchestrator | 2025-07-06 20:25:21.434598 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-06 20:25:21.434606 | orchestrator | Sunday 06 July 2025 20:20:50 +0000 (0:00:02.035) 0:04:19.979 *********** 2025-07-06 20:25:21.434614 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-07-06 20:25:21.434622 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-07-06 20:25:21.434630 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-07-06 20:25:21.434638 | orchestrator | 2025-07-06 20:25:21.434646 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-06 20:25:21.434654 | orchestrator | Sunday 06 July 2025 20:20:51 +0000 (0:00:01.090) 0:04:21.069 *********** 2025-07-06 20:25:21.434662 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-07-06 20:25:21.434670 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-07-06 20:25:21.434678 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-07-06 20:25:21.434686 | orchestrator | 2025-07-06 20:25:21.434694 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-06 20:25:21.434702 | orchestrator | Sunday 06 July 2025 20:20:52 +0000 (0:00:01.263) 0:04:22.333 *********** 2025-07-06 20:25:21.434710 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-07-06 20:25:21.434718 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.434726 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-07-06 20:25:21.434734 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.434742 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-07-06 20:25:21.434750 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.434757 | orchestrator | 2025-07-06 20:25:21.434765 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-07-06 20:25:21.434773 | orchestrator | Sunday 06 July 2025 20:20:53 +0000 (0:00:00.797) 0:04:23.131 *********** 2025-07-06 20:25:21.434781 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-06 20:25:21.434835 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-06 20:25:21.434850 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.434863 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-06 20:25:21.434876 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-06 20:25:21.434888 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.434901 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  
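
The module-load and bridge-nf-call steps logged above (and continuing below) amount to loading br_netfilter, persisting it through modules-load.d, and enabling the two bridge-nf-call sysctls so bridged traffic traverses iptables, which the compute-host security-group filtering relies on. A minimal Ansible sketch of tasks of this shape, assuming generic collection modules and file names for illustration rather than the actual kolla-ansible/OSISM role code:

  # Illustrative tasks-file sketch; module/file names here are assumptions,
  # not the role as shipped.
  - name: Load br_netfilter module
    community.general.modprobe:
      name: br_netfilter
      state: present

  - name: Persist br_netfilter via modules-load.d
    ansible.builtin.copy:
      dest: /etc/modules-load.d/br_netfilter.conf   # assumed file name
      content: "br_netfilter\n"
      mode: "0644"

  - name: Enable bridge-nf-call sysctl variables
    ansible.posix.sysctl:
      name: "{{ item }}"
      value: "1"
      sysctl_set: true
      state: present
    loop:
      - net.bridge.bridge-nf-call-iptables
      - net.bridge.bridge-nf-call-ip6tables

In the run above these steps are skipped on the controllers (testbed-node-0/1/2) and applied on the compute nodes (testbed-node-3/4/5), matching the group targeting of the nova-cell role.
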
2025-07-06 20:25:21.434914 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-06 20:25:21.434922 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.434930 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-06 20:25:21.434938 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-06 20:25:21.434946 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-06 20:25:21.434954 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-06 20:25:21.434962 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-06 20:25:21.434970 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-06 20:25:21.434978 | orchestrator | 2025-07-06 20:25:21.434986 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-07-06 20:25:21.434994 | orchestrator | Sunday 06 July 2025 20:20:54 +0000 (0:00:01.094) 0:04:24.225 *********** 2025-07-06 20:25:21.435002 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.435010 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:25:21.435018 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.435034 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:25:21.435042 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.435051 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:25:21.435059 | orchestrator | 2025-07-06 20:25:21.435067 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-07-06 20:25:21.435076 | orchestrator | Sunday 06 July 2025 20:20:56 +0000 (0:00:01.723) 0:04:25.949 *********** 2025-07-06 20:25:21.435084 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.435092 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.435116 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.435124 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:25:21.435133 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:25:21.435141 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:25:21.435149 | orchestrator | 2025-07-06 20:25:21.435157 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-07-06 20:25:21.435165 | orchestrator | Sunday 06 July 2025 20:20:58 +0000 (0:00:02.011) 0:04:27.961 *********** 2025-07-06 20:25:21.435175 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435306 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435355 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435366 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435392 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435401 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435410 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435535 | orchestrator | 2025-07-06 20:25:21.435544 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-06 20:25:21.435552 | orchestrator | Sunday 06 July 2025 20:21:02 +0000 (0:00:04.140) 0:04:32.101 *********** 2025-07-06 20:25:21.435569 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:25:21.435578 | orchestrator | 2025-07-06 20:25:21.435586 | orchestrator | TASK 
[service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-07-06 20:25:21.435594 | orchestrator | Sunday 06 July 2025 20:21:03 +0000 (0:00:01.025) 0:04:33.127 *********** 2025-07-06 20:25:21.435603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435612 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435621 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435685 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435693 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435700 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435755 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435764 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435772 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.435779 | orchestrator | 2025-07-06 20:25:21.435786 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-07-06 20:25:21.435793 | orchestrator | Sunday 06 July 2025 20:21:08 +0000 (0:00:04.341) 0:04:37.468 *********** 2025-07-06 20:25:21.435800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:25:21.435830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-06 20:25:21.435839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.435846 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.435853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:25:21.435861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-06 20:25:21.435868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.435899 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:25:21.435907 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.435914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-06 20:25:21.435922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.435929 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.435936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-06 20:25:21.435943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.435950 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.435957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-06 20:25:21.435989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.435998 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.436005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-06 20:25:21.436012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.436019 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.436026 | orchestrator | 2025-07-06 20:25:21.436032 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-07-06 20:25:21.436039 | orchestrator | Sunday 06 July 2025 20:21:10 +0000 (0:00:02.168) 0:04:39.637 *********** 2025-07-06 20:25:21.436046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:25:21.436054 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-06 
20:25:21.436065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.436091 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.436099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:25:21.436106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-06 20:25:21.436114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.436121 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.436129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:25:21.436140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-06 20:25:21.436166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.436174 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.436204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-06 20:25:21.436212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.436219 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.436226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-06 20:25:21.436233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.436246 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.436253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-06 20:25:21.436281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.436289 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.436297 | orchestrator | 2025-07-06 20:25:21.436304 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-06 20:25:21.436311 | orchestrator | Sunday 06 July 2025 20:21:11 +0000 (0:00:01.699) 0:04:41.336 *********** 2025-07-06 20:25:21.436318 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.436324 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.436331 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.436338 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:25:21.436345 | orchestrator | 2025-07-06 20:25:21.436351 | orchestrator | TASK 
[nova-cell : Check nova keyring file] ************************************* 2025-07-06 20:25:21.436358 | orchestrator | Sunday 06 July 2025 20:21:12 +0000 (0:00:00.741) 0:04:42.078 *********** 2025-07-06 20:25:21.436365 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-06 20:25:21.436372 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-06 20:25:21.436379 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-06 20:25:21.436386 | orchestrator | 2025-07-06 20:25:21.436393 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-07-06 20:25:21.436399 | orchestrator | Sunday 06 July 2025 20:21:13 +0000 (0:00:00.925) 0:04:43.003 *********** 2025-07-06 20:25:21.436406 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-06 20:25:21.436413 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-06 20:25:21.436419 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-06 20:25:21.436426 | orchestrator | 2025-07-06 20:25:21.436433 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-07-06 20:25:21.436439 | orchestrator | Sunday 06 July 2025 20:21:14 +0000 (0:00:01.374) 0:04:44.378 *********** 2025-07-06 20:25:21.436446 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:25:21.436453 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:25:21.436460 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:25:21.436466 | orchestrator | 2025-07-06 20:25:21.436473 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-07-06 20:25:21.436480 | orchestrator | Sunday 06 July 2025 20:21:15 +0000 (0:00:00.472) 0:04:44.851 *********** 2025-07-06 20:25:21.436491 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:25:21.436498 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:25:21.436505 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:25:21.436511 | orchestrator | 2025-07-06 20:25:21.436518 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-07-06 20:25:21.436525 | orchestrator | Sunday 06 July 2025 20:21:15 +0000 (0:00:00.448) 0:04:45.299 *********** 2025-07-06 20:25:21.436532 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-07-06 20:25:21.436539 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-07-06 20:25:21.436546 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-07-06 20:25:21.436552 | orchestrator | 2025-07-06 20:25:21.436559 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-07-06 20:25:21.436566 | orchestrator | Sunday 06 July 2025 20:21:17 +0000 (0:00:01.147) 0:04:46.446 *********** 2025-07-06 20:25:21.436572 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-07-06 20:25:21.436579 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-07-06 20:25:21.436586 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-07-06 20:25:21.436593 | orchestrator | 2025-07-06 20:25:21.436600 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-07-06 20:25:21.436606 | orchestrator | Sunday 06 July 2025 20:21:18 +0000 (0:00:01.117) 0:04:47.564 *********** 2025-07-06 20:25:21.436613 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-07-06 20:25:21.436620 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-07-06 20:25:21.436627 | orchestrator | 
changed: [testbed-node-5] => (item=nova-compute) 2025-07-06 20:25:21.436634 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-07-06 20:25:21.436640 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-07-06 20:25:21.436647 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-07-06 20:25:21.436654 | orchestrator | 2025-07-06 20:25:21.436660 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-07-06 20:25:21.436667 | orchestrator | Sunday 06 July 2025 20:21:22 +0000 (0:00:04.052) 0:04:51.617 *********** 2025-07-06 20:25:21.436674 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.436681 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.436688 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.436695 | orchestrator | 2025-07-06 20:25:21.436701 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-07-06 20:25:21.436708 | orchestrator | Sunday 06 July 2025 20:21:22 +0000 (0:00:00.287) 0:04:51.904 *********** 2025-07-06 20:25:21.436715 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.436721 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.436728 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.436735 | orchestrator | 2025-07-06 20:25:21.436742 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-07-06 20:25:21.436749 | orchestrator | Sunday 06 July 2025 20:21:22 +0000 (0:00:00.256) 0:04:52.161 *********** 2025-07-06 20:25:21.436755 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:25:21.436762 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:25:21.436769 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:25:21.436776 | orchestrator | 2025-07-06 20:25:21.436783 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-07-06 20:25:21.436790 | orchestrator | Sunday 06 July 2025 20:21:24 +0000 (0:00:01.569) 0:04:53.730 *********** 2025-07-06 20:25:21.436821 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-07-06 20:25:21.436830 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-07-06 20:25:21.436837 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-07-06 20:25:21.436849 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-07-06 20:25:21.436856 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-07-06 20:25:21.436863 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-07-06 20:25:21.436869 | orchestrator | 2025-07-06 20:25:21.436876 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-07-06 20:25:21.436883 | orchestrator | Sunday 06 July 2025 20:21:27 +0000 (0:00:03.252) 0:04:56.983 *********** 2025-07-06 20:25:21.436890 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-06 
20:25:21.436896 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-06 20:25:21.436903 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-06 20:25:21.436910 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-06 20:25:21.436917 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:25:21.436924 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-06 20:25:21.436930 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:25:21.436937 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-06 20:25:21.436944 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:25:21.436951 | orchestrator | 2025-07-06 20:25:21.436958 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-07-06 20:25:21.436965 | orchestrator | Sunday 06 July 2025 20:21:31 +0000 (0:00:03.692) 0:05:00.676 *********** 2025-07-06 20:25:21.436971 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.436978 | orchestrator | 2025-07-06 20:25:21.436985 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-07-06 20:25:21.436992 | orchestrator | Sunday 06 July 2025 20:21:31 +0000 (0:00:00.133) 0:05:00.809 *********** 2025-07-06 20:25:21.436999 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.437005 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.437012 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.437019 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.437025 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.437032 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.437039 | orchestrator | 2025-07-06 20:25:21.437046 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-07-06 20:25:21.437053 | orchestrator | Sunday 06 July 2025 20:21:32 +0000 (0:00:00.860) 0:05:01.669 *********** 2025-07-06 20:25:21.437059 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-06 20:25:21.437066 | orchestrator | 2025-07-06 20:25:21.437073 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-07-06 20:25:21.437080 | orchestrator | Sunday 06 July 2025 20:21:32 +0000 (0:00:00.706) 0:05:02.375 *********** 2025-07-06 20:25:21.437087 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.437094 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.437100 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.437107 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.437114 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.437120 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.437127 | orchestrator | 2025-07-06 20:25:21.437134 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-07-06 20:25:21.437141 | orchestrator | Sunday 06 July 2025 20:21:33 +0000 (0:00:00.504) 0:05:02.880 *********** 2025-07-06 20:25:21.437148 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437165 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437232 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437246 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437254 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437283 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437300 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437314 | orchestrator | 2025-07-06 20:25:21.437321 | orchestrator | TASK 
[nova-cell : Copying over nova.conf] ************************************** 2025-07-06 20:25:21.437328 | orchestrator | Sunday 06 July 2025 20:21:37 +0000 (0:00:03.758) 0:05:06.638 *********** 2025-07-06 20:25:21.437335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:25:21.437343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-06 20:25:21.437386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:25:21.437395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-06 20:25:21.437407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 
'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:25:21.437415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-06 20:25:21.437423 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437430 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437445 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437499 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.437506 | orchestrator | 2025-07-06 20:25:21.437513 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-07-06 20:25:21.437520 | orchestrator | Sunday 06 July 2025 20:21:42 +0000 (0:00:05.513) 0:05:12.151 *********** 2025-07-06 20:25:21.437527 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.437533 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.437540 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.437547 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.437554 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.437560 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.437567 | orchestrator | 2025-07-06 20:25:21.437574 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-07-06 20:25:21.437581 | orchestrator | Sunday 06 July 2025 20:21:44 +0000 (0:00:01.396) 0:05:13.548 *********** 2025-07-06 20:25:21.437588 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-07-06 20:25:21.437598 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-07-06 20:25:21.437605 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-07-06 20:25:21.437612 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-07-06 20:25:21.437618 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-07-06 20:25:21.437625 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-07-06 20:25:21.437632 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-07-06 20:25:21.437639 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.437646 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-07-06 20:25:21.437653 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.437659 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-07-06 20:25:21.437666 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.437673 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-07-06 20:25:21.437680 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-07-06 20:25:21.437687 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-07-06 20:25:21.437694 | orchestrator | 2025-07-06 20:25:21.437701 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-07-06 20:25:21.437708 | 
orchestrator | Sunday 06 July 2025 20:21:47 +0000 (0:00:03.490) 0:05:17.038 *********** 2025-07-06 20:25:21.437719 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.437725 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.437732 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.437739 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.437746 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.437752 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.437759 | orchestrator | 2025-07-06 20:25:21.437766 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-07-06 20:25:21.437773 | orchestrator | Sunday 06 July 2025 20:21:48 +0000 (0:00:00.620) 0:05:17.659 *********** 2025-07-06 20:25:21.437780 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-07-06 20:25:21.437787 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-07-06 20:25:21.437794 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-07-06 20:25:21.437801 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-07-06 20:25:21.437808 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-07-06 20:25:21.437815 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-07-06 20:25:21.437821 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-07-06 20:25:21.437828 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-07-06 20:25:21.437836 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-07-06 20:25:21.437843 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-06 20:25:21.437849 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.437856 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-06 20:25:21.437863 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.437870 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-06 20:25:21.437876 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.437883 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-06 20:25:21.437890 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-06 20:25:21.437896 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-06 20:25:21.437903 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-07-06 20:25:21.437910 | orchestrator | changed: [testbed-node-5] => (item={'src': 
'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-07-06 20:25:21.437917 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-07-06 20:25:21.437923 | orchestrator | 2025-07-06 20:25:21.437930 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-07-06 20:25:21.437941 | orchestrator | Sunday 06 July 2025 20:21:53 +0000 (0:00:04.796) 0:05:22.455 *********** 2025-07-06 20:25:21.437948 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-06 20:25:21.437955 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-06 20:25:21.437961 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-06 20:25:21.437973 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-06 20:25:21.437979 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-06 20:25:21.437986 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-06 20:25:21.437993 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-06 20:25:21.438000 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-06 20:25:21.438006 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-06 20:25:21.438013 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-06 20:25:21.438064 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-06 20:25:21.438071 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-06 20:25:21.438078 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-06 20:25:21.438085 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-06 20:25:21.438091 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-06 20:25:21.438098 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-07-06 20:25:21.438105 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.438112 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-07-06 20:25:21.438119 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.438126 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-07-06 20:25:21.438133 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.438140 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-06 20:25:21.438146 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-06 20:25:21.438153 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-06 20:25:21.438160 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-06 20:25:21.438167 | orchestrator | changed: [testbed-node-4] => (item={'src': 
'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-06 20:25:21.438174 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-06 20:25:21.438222 | orchestrator | 2025-07-06 20:25:21.438231 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-07-06 20:25:21.438238 | orchestrator | Sunday 06 July 2025 20:22:01 +0000 (0:00:07.964) 0:05:30.419 *********** 2025-07-06 20:25:21.438245 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.438252 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.438258 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.438265 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.438272 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.438278 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.438285 | orchestrator | 2025-07-06 20:25:21.438292 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-07-06 20:25:21.438299 | orchestrator | Sunday 06 July 2025 20:22:01 +0000 (0:00:00.769) 0:05:31.189 *********** 2025-07-06 20:25:21.438306 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.438313 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.438319 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.438326 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.438332 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.438339 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.438351 | orchestrator | 2025-07-06 20:25:21.438358 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-07-06 20:25:21.438365 | orchestrator | Sunday 06 July 2025 20:22:02 +0000 (0:00:00.753) 0:05:31.942 *********** 2025-07-06 20:25:21.438372 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.438378 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.438385 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.438392 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:25:21.438398 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:25:21.438405 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:25:21.438412 | orchestrator | 2025-07-06 20:25:21.438418 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-07-06 20:25:21.438425 | orchestrator | Sunday 06 July 2025 20:22:04 +0000 (0:00:02.386) 0:05:34.328 *********** 2025-07-06 20:25:21.438439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:25:21.438447 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-06 20:25:21.438454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.438461 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.438469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:25:21.438481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-06 20:25:21.438491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.438499 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.438506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:25:21.438513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-06 20:25:21.438520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.438532 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.438539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  
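The long runs of per-host "changed"/"skipping" results above come from tasks that loop over the nova-cell service map and render one file per service that is both enabled and mapped to the host's group; controllers therefore skip compute-only items such as nova-libvirt and nova-ssh, while compute nodes skip nova-novncproxy and nova-conductor. A minimal sketch of that looping pattern, assuming a services dict named nova_cell_services shaped like the items printed in the log (an illustrative approximation, not the actual kolla-ansible task):

# Illustrative approximation of the per-service templating loop; the variable
# nova_cell_services and the handler name are assumptions, not the real
# kolla-ansible source.
- name: Copying over config.json files for services
  become: true
  template:
    src: "{{ item.key }}.json.j2"
    dest: "/etc/kolla/{{ item.key }}/config.json"
    mode: "0660"
  when:
    - item.value.enabled | bool
    - inventory_hostname in groups[item.value.group]
  with_dict: "{{ nova_cell_services }}"
  notify:
    - "Restart {{ item.key }} container"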
2025-07-06 20:25:21.438546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.438553 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.438566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-06 20:25:21.438573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.438580 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.438588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-06 20:25:21.438595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:21.438602 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.438613 | orchestrator | 2025-07-06 20:25:21.438620 | orchestrator | TASK [nova-cell : Copying over vendordata file to 
containers] ****************** 2025-07-06 20:25:21.438627 | orchestrator | Sunday 06 July 2025 20:22:06 +0000 (0:00:01.556) 0:05:35.884 *********** 2025-07-06 20:25:21.438634 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-07-06 20:25:21.438640 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-07-06 20:25:21.438647 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.438654 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-07-06 20:25:21.438661 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-07-06 20:25:21.438667 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.438674 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-07-06 20:25:21.438681 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-07-06 20:25:21.438688 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.438694 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-07-06 20:25:21.438701 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-07-06 20:25:21.438708 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.438714 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-07-06 20:25:21.438721 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-07-06 20:25:21.438728 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.438735 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-07-06 20:25:21.438742 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-07-06 20:25:21.438749 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.438755 | orchestrator | 2025-07-06 20:25:21.438761 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-07-06 20:25:21.438767 | orchestrator | Sunday 06 July 2025 20:22:07 +0000 (0:00:00.518) 0:05:36.403 *********** 2025-07-06 20:25:21.438779 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:25:21.438786 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:25:21.438793 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:25:21.438804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:25:21.438811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:25:21.438817 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:25:21.438827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:25:21.438834 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:25:21.438841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:25:21.438851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.438858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.438864 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.438875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.438882 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.438889 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:21.438899 | orchestrator | 2025-07-06 20:25:21.438906 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-06 20:25:21.438912 | orchestrator | Sunday 06 July 2025 20:22:09 +0000 (0:00:02.479) 0:05:38.882 *********** 2025-07-06 20:25:21.438918 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.438925 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.438931 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.438938 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.438944 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.438950 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.438956 | orchestrator | 2025-07-06 20:25:21.438963 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-06 20:25:21.438969 | orchestrator | Sunday 06 July 2025 20:22:09 +0000 (0:00:00.503) 0:05:39.386 *********** 2025-07-06 20:25:21.438975 | orchestrator | 2025-07-06 20:25:21.438981 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2025-07-06 20:25:21.438987 | orchestrator | Sunday 06 July 2025 20:22:10 +0000 (0:00:00.232) 0:05:39.619 *********** 2025-07-06 20:25:21.438994 | orchestrator | 2025-07-06 20:25:21.439000 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-06 20:25:21.439006 | orchestrator | Sunday 06 July 2025 20:22:10 +0000 (0:00:00.123) 0:05:39.742 *********** 2025-07-06 20:25:21.439012 | orchestrator | 2025-07-06 20:25:21.439018 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-06 20:25:21.439024 | orchestrator | Sunday 06 July 2025 20:22:10 +0000 (0:00:00.120) 0:05:39.863 *********** 2025-07-06 20:25:21.439031 | orchestrator | 2025-07-06 20:25:21.439037 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-06 20:25:21.439043 | orchestrator | Sunday 06 July 2025 20:22:10 +0000 (0:00:00.120) 0:05:39.983 *********** 2025-07-06 20:25:21.439049 | orchestrator | 2025-07-06 20:25:21.439055 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-06 20:25:21.439061 | orchestrator | Sunday 06 July 2025 20:22:10 +0000 (0:00:00.128) 0:05:40.112 *********** 2025-07-06 20:25:21.439068 | orchestrator | 2025-07-06 20:25:21.439074 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-07-06 20:25:21.439080 | orchestrator | Sunday 06 July 2025 20:22:10 +0000 (0:00:00.114) 0:05:40.226 *********** 2025-07-06 20:25:21.439086 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:25:21.439093 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:21.439100 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:25:21.439106 | orchestrator | 2025-07-06 20:25:21.439112 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-07-06 20:25:21.439118 | orchestrator | Sunday 06 July 2025 20:22:22 +0000 (0:00:11.607) 0:05:51.833 *********** 2025-07-06 20:25:21.439125 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:21.439131 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:25:21.439137 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:25:21.439143 | orchestrator | 2025-07-06 20:25:21.439150 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-07-06 20:25:21.439156 | orchestrator | Sunday 06 July 2025 20:22:41 +0000 (0:00:19.114) 0:06:10.947 *********** 2025-07-06 20:25:21.439162 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:25:21.439168 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:25:21.439194 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:25:21.439202 | orchestrator | 2025-07-06 20:25:21.439208 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-07-06 20:25:21.439214 | orchestrator | Sunday 06 July 2025 20:23:06 +0000 (0:00:25.210) 0:06:36.158 *********** 2025-07-06 20:25:21.439224 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:25:21.439230 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:25:21.439237 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:25:21.439243 | orchestrator | 2025-07-06 20:25:21.439249 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-07-06 20:25:21.439255 | orchestrator | Sunday 06 July 2025 20:23:41 +0000 (0:00:35.120) 
0:07:11.278 *********** 2025-07-06 20:25:21.439261 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2025-07-06 20:25:21.439268 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-07-06 20:25:21.439274 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2025-07-06 20:25:21.439280 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:25:21.439286 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:25:21.439292 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:25:21.439299 | orchestrator | 2025-07-06 20:25:21.439305 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-07-06 20:25:21.439311 | orchestrator | Sunday 06 July 2025 20:23:48 +0000 (0:00:06.454) 0:07:17.733 *********** 2025-07-06 20:25:21.439317 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:25:21.439323 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:25:21.439329 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:25:21.439335 | orchestrator | 2025-07-06 20:25:21.439341 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-07-06 20:25:21.439348 | orchestrator | Sunday 06 July 2025 20:23:49 +0000 (0:00:00.780) 0:07:18.514 *********** 2025-07-06 20:25:21.439354 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:25:21.439360 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:25:21.439366 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:25:21.439372 | orchestrator | 2025-07-06 20:25:21.439378 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-07-06 20:25:21.439385 | orchestrator | Sunday 06 July 2025 20:24:10 +0000 (0:00:21.066) 0:07:39.580 *********** 2025-07-06 20:25:21.439391 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.439397 | orchestrator | 2025-07-06 20:25:21.439403 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-07-06 20:25:21.439410 | orchestrator | Sunday 06 July 2025 20:24:10 +0000 (0:00:00.136) 0:07:39.717 *********** 2025-07-06 20:25:21.439416 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.439422 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.439428 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.439434 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.439440 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.439447 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
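The two FAILED - RETRYING entries above come from tasks that poll a readiness condition with a bounded retry count and a fixed delay: the libvirt readiness check was allowed 10 retries and the compute-service registration wait 20, and the osism task watcher further down in this log applies the same idea at one-second intervals. Below is a rough, self-contained Python sketch of that poll-with-retries pattern; the wait_until helper, the libvirt_ready probe, and the docker exec command are illustrative assumptions only, not the actual kolla-ansible or OSISM code.

import subprocess
import time

def wait_until(check, retries=10, delay=5, label="check"):
    # One initial attempt plus up to `retries` retries, sleeping `delay`
    # seconds between attempts; the message is shaped like the
    # "FAILED - RETRYING: ... (N retries left)." lines in this log.
    for remaining in range(retries, -1, -1):
        if check():
            return True
        if remaining:
            print(f"FAILED - RETRYING: {label} ({remaining} retries left).")
            time.sleep(delay)
    return False

def libvirt_ready():
    # Hypothetical probe mirroring the healthcheck test shown in the
    # nova-libvirt container definition above ('virsh version --daemon'),
    # wrapped in docker exec purely for illustration.
    result = subprocess.run(
        ["docker", "exec", "nova_libvirt", "virsh", "version", "--daemon"],
        capture_output=True,
    )
    return result.returncode == 0

wait_until(libvirt_ready, retries=10, delay=5,
           label="[testbed-node-3]: Checking libvirt container is ready")

Kolla-ansible expresses such waits with Ansible's until/retries/delay loop rather than hand-written Python; the sketch only mirrors the behaviour visible in the log output.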
2025-07-06 20:25:21.439453 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:25:21.439459 | orchestrator | 2025-07-06 20:25:21.439465 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-07-06 20:25:21.439471 | orchestrator | Sunday 06 July 2025 20:24:33 +0000 (0:00:22.747) 0:08:02.464 *********** 2025-07-06 20:25:21.439478 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.439484 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.439490 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.439496 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.439502 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.439508 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.439518 | orchestrator | 2025-07-06 20:25:21.439525 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-07-06 20:25:21.439531 | orchestrator | Sunday 06 July 2025 20:24:42 +0000 (0:00:09.020) 0:08:11.485 *********** 2025-07-06 20:25:21.439537 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.439543 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.439550 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.439556 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.439562 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.439568 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-07-06 20:25:21.439574 | orchestrator | 2025-07-06 20:25:21.439581 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-07-06 20:25:21.439587 | orchestrator | Sunday 06 July 2025 20:24:46 +0000 (0:00:04.645) 0:08:16.131 *********** 2025-07-06 20:25:21.439593 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:25:21.439599 | orchestrator | 2025-07-06 20:25:21.439606 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-07-06 20:25:21.439612 | orchestrator | Sunday 06 July 2025 20:24:59 +0000 (0:00:12.696) 0:08:28.827 *********** 2025-07-06 20:25:21.439618 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:25:21.439625 | orchestrator | 2025-07-06 20:25:21.439631 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-07-06 20:25:21.439637 | orchestrator | Sunday 06 July 2025 20:25:00 +0000 (0:00:01.324) 0:08:30.152 *********** 2025-07-06 20:25:21.439643 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.439649 | orchestrator | 2025-07-06 20:25:21.439655 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-07-06 20:25:21.439662 | orchestrator | Sunday 06 July 2025 20:25:02 +0000 (0:00:01.401) 0:08:31.553 *********** 2025-07-06 20:25:21.439668 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:25:21.439674 | orchestrator | 2025-07-06 20:25:21.439680 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-07-06 20:25:21.439686 | orchestrator | Sunday 06 July 2025 20:25:12 +0000 (0:00:10.568) 0:08:42.122 *********** 2025-07-06 20:25:21.439693 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:25:21.439699 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:25:21.439705 | orchestrator | ok: 
[testbed-node-5] 2025-07-06 20:25:21.439712 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:25:21.439718 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:25:21.439724 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:25:21.439730 | orchestrator | 2025-07-06 20:25:21.439740 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-07-06 20:25:21.439746 | orchestrator | 2025-07-06 20:25:21.439752 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-07-06 20:25:21.439759 | orchestrator | Sunday 06 July 2025 20:25:14 +0000 (0:00:01.685) 0:08:43.808 *********** 2025-07-06 20:25:21.439765 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:21.439771 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:25:21.439777 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:25:21.439783 | orchestrator | 2025-07-06 20:25:21.439790 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-07-06 20:25:21.439796 | orchestrator | 2025-07-06 20:25:21.439802 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-07-06 20:25:21.439808 | orchestrator | Sunday 06 July 2025 20:25:15 +0000 (0:00:01.066) 0:08:44.874 *********** 2025-07-06 20:25:21.439815 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.439821 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.439827 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.439833 | orchestrator | 2025-07-06 20:25:21.439839 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-07-06 20:25:21.439846 | orchestrator | 2025-07-06 20:25:21.439852 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-07-06 20:25:21.439862 | orchestrator | Sunday 06 July 2025 20:25:15 +0000 (0:00:00.498) 0:08:45.372 *********** 2025-07-06 20:25:21.439869 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-07-06 20:25:21.439875 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-07-06 20:25:21.439881 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-07-06 20:25:21.439887 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-07-06 20:25:21.439894 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-07-06 20:25:21.439900 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-07-06 20:25:21.439906 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:21.439912 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-07-06 20:25:21.439919 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-07-06 20:25:21.439925 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-07-06 20:25:21.439931 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-07-06 20:25:21.439937 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-07-06 20:25:21.439944 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-07-06 20:25:21.439950 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:21.439956 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-07-06 20:25:21.439962 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-07-06 20:25:21.439968 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-07-06 20:25:21.439975 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-07-06 20:25:21.439981 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-07-06 20:25:21.439987 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-07-06 20:25:21.439993 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:21.439999 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-07-06 20:25:21.440005 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-07-06 20:25:21.440012 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-07-06 20:25:21.440018 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-07-06 20:25:21.440024 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-07-06 20:25:21.440031 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-07-06 20:25:21.440037 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.440043 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-07-06 20:25:21.440049 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-07-06 20:25:21.440056 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-07-06 20:25:21.440062 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-07-06 20:25:21.440068 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-07-06 20:25:21.440074 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-07-06 20:25:21.440080 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.440086 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-07-06 20:25:21.440093 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-07-06 20:25:21.440099 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-07-06 20:25:21.440106 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-07-06 20:25:21.440112 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-07-06 20:25:21.440118 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-07-06 20:25:21.440124 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.440130 | orchestrator | 2025-07-06 20:25:21.440136 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-07-06 20:25:21.440146 | orchestrator | 2025-07-06 20:25:21.440153 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-07-06 20:25:21.440159 | orchestrator | Sunday 06 July 2025 20:25:17 +0000 (0:00:01.240) 0:08:46.613 *********** 2025-07-06 20:25:21.440165 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-07-06 20:25:21.440171 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-07-06 20:25:21.440177 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.440216 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-07-06 20:25:21.440223 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-07-06 20:25:21.440234 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.440240 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-07-06 20:25:21.440246 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  2025-07-06 20:25:21.440253 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.440259 | orchestrator | 2025-07-06 20:25:21.440265 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-07-06 20:25:21.440271 | orchestrator | 2025-07-06 20:25:21.440278 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-07-06 20:25:21.440284 | orchestrator | Sunday 06 July 2025 20:25:17 +0000 (0:00:00.693) 0:08:47.307 *********** 2025-07-06 20:25:21.440290 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.440296 | orchestrator | 2025-07-06 20:25:21.440302 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-07-06 20:25:21.440308 | orchestrator | 2025-07-06 20:25:21.440315 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-07-06 20:25:21.440321 | orchestrator | Sunday 06 July 2025 20:25:18 +0000 (0:00:00.672) 0:08:47.980 *********** 2025-07-06 20:25:21.440327 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:21.440333 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:21.440339 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:21.440346 | orchestrator | 2025-07-06 20:25:21.440352 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:25:21.440358 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:25:21.440365 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-07-06 20:25:21.440371 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-07-06 20:25:21.440377 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-07-06 20:25:21.440384 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-07-06 20:25:21.440390 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-07-06 20:25:21.440396 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-07-06 20:25:21.440402 | orchestrator | 2025-07-06 20:25:21.440409 | orchestrator | 2025-07-06 20:25:21.440415 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:25:21.440421 | orchestrator | Sunday 06 July 2025 20:25:18 +0000 (0:00:00.407) 0:08:48.387 *********** 2025-07-06 20:25:21.440427 | orchestrator | =============================================================================== 2025-07-06 20:25:21.440434 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 35.12s 2025-07-06 20:25:21.440440 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.16s 2025-07-06 20:25:21.440453 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 26.46s 2025-07-06 20:25:21.440459 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 25.21s 2025-07-06 20:25:21.440466 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.75s 2025-07-06 20:25:21.440472 | orchestrator | nova-cell : 
Restart nova-compute container ----------------------------- 21.07s 2025-07-06 20:25:21.440478 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.03s 2025-07-06 20:25:21.440484 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.11s 2025-07-06 20:25:21.440490 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.49s 2025-07-06 20:25:21.440496 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.67s 2025-07-06 20:25:21.440503 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.51s 2025-07-06 20:25:21.440509 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.70s 2025-07-06 20:25:21.440515 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.72s 2025-07-06 20:25:21.440521 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.61s 2025-07-06 20:25:21.440527 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.96s 2025-07-06 20:25:21.440533 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.88s 2025-07-06 20:25:21.440540 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.57s 2025-07-06 20:25:21.440546 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.27s 2025-07-06 20:25:21.440552 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.02s 2025-07-06 20:25:21.440558 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.94s 2025-07-06 20:25:21.440564 | orchestrator | 2025-07-06 20:25:21 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:25:21.440570 | orchestrator | 2025-07-06 20:25:21 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:24.468842 | orchestrator | 2025-07-06 20:25:24 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:25:24.470525 | orchestrator | 2025-07-06 20:25:24 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:25:24.472152 | orchestrator | 2025-07-06 20:25:24 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:25:24.472306 | orchestrator | 2025-07-06 20:25:24 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:27.520658 | orchestrator | 2025-07-06 20:25:27 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:25:27.522296 | orchestrator | 2025-07-06 20:25:27 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:25:27.524263 | orchestrator | 2025-07-06 20:25:27 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:25:27.524303 | orchestrator | 2025-07-06 20:25:27 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:30.569460 | orchestrator | 2025-07-06 20:25:30 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:25:30.572301 | orchestrator | 2025-07-06 20:25:30 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:25:30.574396 | orchestrator | 2025-07-06 20:25:30 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:25:30.574429 | orchestrator | 2025-07-06 
20:25:30 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:33.623918 | orchestrator | 2025-07-06 20:25:33 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:25:33.625748 | orchestrator | 2025-07-06 20:25:33 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:25:33.627407 | orchestrator | 2025-07-06 20:25:33 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:25:33.627463 | orchestrator | 2025-07-06 20:25:33 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:36.667372 | orchestrator | 2025-07-06 20:25:36 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:25:36.669167 | orchestrator | 2025-07-06 20:25:36 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:25:36.672016 | orchestrator | 2025-07-06 20:25:36 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:25:36.672050 | orchestrator | 2025-07-06 20:25:36 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:39.716795 | orchestrator | 2025-07-06 20:25:39 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:25:39.718592 | orchestrator | 2025-07-06 20:25:39 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:25:39.719770 | orchestrator | 2025-07-06 20:25:39 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:25:39.719801 | orchestrator | 2025-07-06 20:25:39 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:42.765029 | orchestrator | 2025-07-06 20:25:42 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:25:42.767340 | orchestrator | 2025-07-06 20:25:42 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:25:42.770475 | orchestrator | 2025-07-06 20:25:42 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:25:42.770569 | orchestrator | 2025-07-06 20:25:42 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:45.818334 | orchestrator | 2025-07-06 20:25:45 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:25:45.820684 | orchestrator | 2025-07-06 20:25:45 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:25:45.821848 | orchestrator | 2025-07-06 20:25:45 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:25:45.821912 | orchestrator | 2025-07-06 20:25:45 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:48.864399 | orchestrator | 2025-07-06 20:25:48 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:25:48.867400 | orchestrator | 2025-07-06 20:25:48 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:25:48.868796 | orchestrator | 2025-07-06 20:25:48 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:25:48.868840 | orchestrator | 2025-07-06 20:25:48 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:51.916605 | orchestrator | 2025-07-06 20:25:51 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:25:51.919773 | orchestrator | 2025-07-06 20:25:51 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:25:51.923155 | orchestrator | 2025-07-06 20:25:51 | INFO  | Task 
2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:25:51.923230 | orchestrator | 2025-07-06 20:25:51 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:54.972775 | orchestrator | 2025-07-06 20:25:54 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:25:54.974813 | orchestrator | 2025-07-06 20:25:54 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:25:54.978639 | orchestrator | 2025-07-06 20:25:54 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:25:54.978665 | orchestrator | 2025-07-06 20:25:54 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:58.023794 | orchestrator | 2025-07-06 20:25:58 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:25:58.026423 | orchestrator | 2025-07-06 20:25:58 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:25:58.028126 | orchestrator | 2025-07-06 20:25:58 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:25:58.028340 | orchestrator | 2025-07-06 20:25:58 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:01.072281 | orchestrator | 2025-07-06 20:26:01 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:26:01.075346 | orchestrator | 2025-07-06 20:26:01 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:26:01.076856 | orchestrator | 2025-07-06 20:26:01 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:01.076890 | orchestrator | 2025-07-06 20:26:01 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:04.125964 | orchestrator | 2025-07-06 20:26:04 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:26:04.127850 | orchestrator | 2025-07-06 20:26:04 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:26:04.129300 | orchestrator | 2025-07-06 20:26:04 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:04.129634 | orchestrator | 2025-07-06 20:26:04 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:07.171511 | orchestrator | 2025-07-06 20:26:07 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:26:07.172537 | orchestrator | 2025-07-06 20:26:07 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:26:07.173851 | orchestrator | 2025-07-06 20:26:07 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:07.173880 | orchestrator | 2025-07-06 20:26:07 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:10.228774 | orchestrator | 2025-07-06 20:26:10 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:26:10.230511 | orchestrator | 2025-07-06 20:26:10 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:26:10.232184 | orchestrator | 2025-07-06 20:26:10 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:10.232756 | orchestrator | 2025-07-06 20:26:10 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:13.279439 | orchestrator | 2025-07-06 20:26:13 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:26:13.280920 | orchestrator | 2025-07-06 20:26:13 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state 
STARTED 2025-07-06 20:26:13.281976 | orchestrator | 2025-07-06 20:26:13 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:13.281999 | orchestrator | 2025-07-06 20:26:13 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:16.324567 | orchestrator | 2025-07-06 20:26:16 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:26:16.327035 | orchestrator | 2025-07-06 20:26:16 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:26:16.329690 | orchestrator | 2025-07-06 20:26:16 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:16.329741 | orchestrator | 2025-07-06 20:26:16 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:19.366305 | orchestrator | 2025-07-06 20:26:19 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:26:19.368134 | orchestrator | 2025-07-06 20:26:19 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:26:19.371514 | orchestrator | 2025-07-06 20:26:19 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:19.371555 | orchestrator | 2025-07-06 20:26:19 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:22.441132 | orchestrator | 2025-07-06 20:26:22 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:26:22.442291 | orchestrator | 2025-07-06 20:26:22 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:26:22.443199 | orchestrator | 2025-07-06 20:26:22 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:22.443278 | orchestrator | 2025-07-06 20:26:22 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:25.495414 | orchestrator | 2025-07-06 20:26:25 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:26:25.495498 | orchestrator | 2025-07-06 20:26:25 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:26:25.496437 | orchestrator | 2025-07-06 20:26:25 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:25.496653 | orchestrator | 2025-07-06 20:26:25 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:28.542395 | orchestrator | 2025-07-06 20:26:28 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:26:28.543447 | orchestrator | 2025-07-06 20:26:28 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:26:28.544900 | orchestrator | 2025-07-06 20:26:28 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:28.544930 | orchestrator | 2025-07-06 20:26:28 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:31.584763 | orchestrator | 2025-07-06 20:26:31 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:26:31.585898 | orchestrator | 2025-07-06 20:26:31 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:26:31.587279 | orchestrator | 2025-07-06 20:26:31 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:31.587307 | orchestrator | 2025-07-06 20:26:31 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:34.636062 | orchestrator | 2025-07-06 20:26:34 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state STARTED 2025-07-06 20:26:34.637871 | orchestrator 
| 2025-07-06 20:26:34 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:26:34.639715 | orchestrator | 2025-07-06 20:26:34 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:34.639765 | orchestrator | 2025-07-06 20:26:34 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:37.688566 | orchestrator | 2025-07-06 20:26:37 | INFO  | Task ad1003eb-737a-4978-a3c1-dd91672bdd5f is in state SUCCESS 2025-07-06 20:26:37.690395 | orchestrator | 2025-07-06 20:26:37.690551 | orchestrator | 2025-07-06 20:26:37.691255 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:26:37.691273 | orchestrator | 2025-07-06 20:26:37.691287 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:26:37.691328 | orchestrator | Sunday 06 July 2025 20:24:12 +0000 (0:00:00.616) 0:00:00.616 *********** 2025-07-06 20:26:37.691342 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:26:37.691356 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:26:37.691369 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:26:37.691382 | orchestrator | 2025-07-06 20:26:37.691396 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:26:37.691410 | orchestrator | Sunday 06 July 2025 20:24:12 +0000 (0:00:00.598) 0:00:01.215 *********** 2025-07-06 20:26:37.691423 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-07-06 20:26:37.691437 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-07-06 20:26:37.691449 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-07-06 20:26:37.691462 | orchestrator | 2025-07-06 20:26:37.691475 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-07-06 20:26:37.691487 | orchestrator | 2025-07-06 20:26:37.691500 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-07-06 20:26:37.691512 | orchestrator | Sunday 06 July 2025 20:24:13 +0000 (0:00:00.901) 0:00:02.117 *********** 2025-07-06 20:26:37.691525 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:26:37.691540 | orchestrator | 2025-07-06 20:26:37.691552 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-07-06 20:26:37.691565 | orchestrator | Sunday 06 July 2025 20:24:14 +0000 (0:00:01.322) 0:00:03.439 *********** 2025-07-06 20:26:37.691582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:26:37.691599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:26:37.691612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:26:37.691626 | orchestrator | 2025-07-06 20:26:37.691639 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-07-06 20:26:37.691652 | orchestrator | Sunday 06 July 2025 20:24:15 +0000 (0:00:00.819) 0:00:04.259 *********** 2025-07-06 20:26:37.691665 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-07-06 20:26:37.691688 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-07-06 20:26:37.691702 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:26:37.691715 | orchestrator | 2025-07-06 20:26:37.691727 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-07-06 20:26:37.691740 | orchestrator | Sunday 06 July 2025 20:24:16 +0000 (0:00:00.726) 0:00:04.985 *********** 2025-07-06 20:26:37.691753 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:26:37.691766 | orchestrator | 2025-07-06 20:26:37.691778 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-07-06 20:26:37.691792 | orchestrator | Sunday 06 July 2025 20:24:17 +0000 (0:00:00.554) 0:00:05.540 *********** 2025-07-06 20:26:37.691861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:26:37.691878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:26:37.691945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:26:37.691961 | orchestrator | 2025-07-06 20:26:37.691974 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-07-06 20:26:37.691987 | orchestrator | Sunday 06 July 2025 20:24:18 +0000 (0:00:01.207) 0:00:06.747 *********** 2025-07-06 20:26:37.692005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-06 20:26:37.692019 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:26:37.692046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-06 20:26:37.692060 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:26:37.692110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-06 20:26:37.692125 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:26:37.692138 | orchestrator | 2025-07-06 20:26:37.692151 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-07-06 20:26:37.692164 | orchestrator | Sunday 06 July 2025 20:24:18 +0000 (0:00:00.302) 0:00:07.049 *********** 2025-07-06 20:26:37.692177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-06 20:26:37.692190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-06 20:26:37.692203 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:26:37.692216 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:26:37.692252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-06 20:26:37.692274 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:26:37.692286 | orchestrator | 2025-07-06 20:26:37.692299 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-07-06 20:26:37.692311 | orchestrator | Sunday 06 July 2025 20:24:19 +0000 (0:00:00.640) 0:00:07.690 *********** 2025-07-06 20:26:37.692329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:26:37.692375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:26:37.692390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:26:37.692404 | orchestrator | 2025-07-06 20:26:37.692416 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-07-06 20:26:37.692429 | orchestrator | Sunday 06 July 2025 20:24:20 +0000 (0:00:01.112) 0:00:08.803 *********** 2025-07-06 20:26:37.692443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:26:37.692456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}}) 2025-07-06 20:26:37.692482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:26:37.692496 | orchestrator | 2025-07-06 20:26:37.692509 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-07-06 20:26:37.692521 | orchestrator | Sunday 06 July 2025 20:24:21 +0000 (0:00:01.327) 0:00:10.130 *********** 2025-07-06 20:26:37.692534 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:26:37.692547 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:26:37.692559 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:26:37.692572 | orchestrator | 2025-07-06 20:26:37.692585 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-07-06 20:26:37.692598 | orchestrator | Sunday 06 July 2025 20:24:22 +0000 (0:00:00.393) 0:00:10.523 *********** 2025-07-06 20:26:37.692611 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-07-06 20:26:37.692624 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-07-06 20:26:37.692636 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-07-06 20:26:37.692648 | orchestrator | 2025-07-06 20:26:37.692661 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-07-06 20:26:37.692674 | orchestrator | Sunday 06 July 2025 20:24:23 +0000 (0:00:01.220) 0:00:11.744 *********** 2025-07-06 20:26:37.692687 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-07-06 20:26:37.692729 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-07-06 20:26:37.692743 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-07-06 20:26:37.692756 | orchestrator | 2025-07-06 20:26:37.692769 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-07-06 20:26:37.692781 | orchestrator | Sunday 06 July 2025 20:24:24 +0000 (0:00:01.245) 0:00:12.989 *********** 2025-07-06 20:26:37.692793 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:26:37.692806 | orchestrator | 2025-07-06 20:26:37.692819 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-07-06 20:26:37.692831 | orchestrator | Sunday 06 July 2025 20:24:25 +0000 (0:00:00.694) 0:00:13.684 *********** 2025-07-06 20:26:37.692844 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-07-06 20:26:37.692856 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 
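For reference, the "Configuring Prometheus as data source for Grafana" and "Configuring dashboards provisioning" tasks logged above render Grafana provisioning files from prometheus.yaml.j2 and the provisioning.yaml overlay. The snippet below is only an illustrative sketch of what such provisioning files typically look like, following the standard Grafana provisioning format; it is not the rendered output of this job, and the datasource URL, folder, and dashboards path are placeholder values, not the testbed's actual settings.

    # Illustrative sketch only; not the rendered prometheus.yaml.j2 from this deployment.
    # Datasource provisioning in the standard Grafana format; the URL is a placeholder.
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus.service.local:9090   # placeholder, not the testbed VIP
        isDefault: true

    # Illustrative sketch only; not the provisioning.yaml overlay referenced above.
    # Dashboard provider pointing Grafana at a dashboards directory; the path is a placeholder.
    apiVersion: 1
    providers:
      - name: default
        orgId: 1
        folder: ''
        type: file
        options:
          path: /var/lib/grafana/dashboards   # placeholder directory

With a provider like this in place, the "Copying over custom dashboards" task that follows only needs to drop JSON files under the configured path for Grafana to pick them up.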
2025-07-06 20:26:37.692869 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:26:37.692882 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:26:37.692894 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:26:37.692907 | orchestrator | 2025-07-06 20:26:37.692919 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-07-06 20:26:37.692932 | orchestrator | Sunday 06 July 2025 20:24:25 +0000 (0:00:00.650) 0:00:14.335 *********** 2025-07-06 20:26:37.692944 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:26:37.692956 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:26:37.692968 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:26:37.692980 | orchestrator | 2025-07-06 20:26:37.692992 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-07-06 20:26:37.693013 | orchestrator | Sunday 06 July 2025 20:24:26 +0000 (0:00:00.521) 0:00:14.856 *********** 2025-07-06 20:26:37.693027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098245, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8110292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098245, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8110292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098245, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8110292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1098227, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.806029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1098227, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.806029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1098227, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.806029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1098218, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8040292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1098218, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8040292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1098218, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8040292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693201 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1098237, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8080292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1098237, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8080292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1098237, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8080292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1098199, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8010292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1098199, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8010292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1098199, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8010292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1098220, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8050292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1098220, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8050292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1098220, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8050292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1098234, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8070292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1098234, 'dev': 86, 'nlink': 1, 'atime': 
1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8070292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1098234, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8070292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1098194, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.800029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1098194, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.800029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1098194, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.800029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1098162, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.7950292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1098162, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.7950292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1098162, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.7950292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1098204, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8020291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1098204, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8020291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1098204, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8020291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1098175, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.7980292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1098175, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.7980292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1098175, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.7980292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1098230, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8070292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1098230, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8070292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1098230, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8070292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1098208, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.803029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.693999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1098208, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.803029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1098208, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.803029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098241, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.809029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098241, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 
'ctime': 1751830600.809029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098241, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.809029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1098191, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.800029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1098191, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.800029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1098191, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.800029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1098223, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.806029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-07-06 20:26:37.694205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1098223, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.806029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098166, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.797029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1098223, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.806029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098166, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.797029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1098183, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.799029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098166, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.797029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1098183, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.799029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1098213, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8040292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1098183, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.799029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1098213, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8040292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1098213, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8040292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1098335, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8360295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1098335, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8360295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1098321, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8260293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1098335, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8360295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 
1098321, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8260293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1098255, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8120291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1098321, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8260293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1098255, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8120291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098385, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8410294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1098255, 'dev': 86, 'nlink': 1, 
'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8120291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098385, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8410294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098259, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8130293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098259, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8130293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098385, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8410294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1098377, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 
'ctime': 1751830600.8390296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1098377, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8390296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098259, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8130293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098388, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8440294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098388, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8440294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1098377, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8390296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1098364, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8370295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1098364, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8370295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098388, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8440294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1098373, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8390296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1098373, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8390296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1098364, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8370295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098264, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8140292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098264, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8140292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1098373, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8390296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1098325, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8270295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-07-06 20:26:37.694922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1098325, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8270295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098264, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8140292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1098397, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8440294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1098397, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8440294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.694986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1098325, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8270295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1098381, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8400295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1098381, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8400295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1098397, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8440294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1098273, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8160293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1098273, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8160293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1098381, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8400295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1098269, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8140292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1098269, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8140292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1098273, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8160293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1098288, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8180292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 
'inode': 1098288, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8180292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1098269, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8140292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1098295, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8250294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1098295, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8250294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1098288, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8180292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1098329, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8280294, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1098329, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8280294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1098295, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8250294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1098370, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8380294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1098370, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8380294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1098329, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8280294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1098332, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8290293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1098332, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8290293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1098370, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8380294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1098399, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8450296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1098399, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8450296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695441 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1098332, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8290293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1098399, 'dev': 86, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1751830600.8450296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:26:37.695468 | orchestrator | 2025-07-06 20:26:37.695481 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-07-06 20:26:37.695494 | orchestrator | Sunday 06 July 2025 20:25:05 +0000 (0:00:39.198) 0:00:54.054 *********** 2025-07-06 20:26:37.695515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:26:37.695534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:26:37.695548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:26:37.695561 | orchestrator | 2025-07-06 20:26:37.695573 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-07-06 20:26:37.695586 | orchestrator | Sunday 06 July 2025 20:25:06 +0000 (0:00:01.011) 0:00:55.066 *********** 2025-07-06 20:26:37.695599 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:26:37.695611 | orchestrator | 2025-07-06 20:26:37.695625 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-07-06 20:26:37.695637 | orchestrator | Sunday 06 July 2025 20:25:08 +0000 (0:00:02.334) 0:00:57.400 *********** 2025-07-06 20:26:37.695650 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:26:37.695663 | orchestrator | 2025-07-06 20:26:37.695681 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-07-06 20:26:37.695694 | orchestrator | Sunday 06 July 2025 20:25:11 +0000 (0:00:02.254) 0:00:59.655 *********** 2025-07-06 20:26:37.695708 | orchestrator | 2025-07-06 20:26:37.695722 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-07-06 20:26:37.695735 | orchestrator | Sunday 06 July 2025 20:25:11 +0000 (0:00:00.235) 0:00:59.890 *********** 2025-07-06 20:26:37.695748 | orchestrator | 2025-07-06 20:26:37.695762 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-07-06 20:26:37.695774 | orchestrator | Sunday 06 July 2025 20:25:11 +0000 (0:00:00.062) 0:00:59.953 *********** 2025-07-06 20:26:37.695786 | orchestrator | 2025-07-06 20:26:37.695798 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-07-06 20:26:37.695810 | orchestrator | Sunday 06 July 2025 20:25:11 +0000 (0:00:00.062) 0:01:00.015 *********** 2025-07-06 20:26:37.695822 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:26:37.695834 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:26:37.695845 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:26:37.695857 | orchestrator | 2025-07-06 20:26:37.695869 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-07-06 20:26:37.695889 | orchestrator | Sunday 06 July 2025 20:25:18 +0000 (0:00:06.925) 0:01:06.940 *********** 2025-07-06 20:26:37.695901 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:26:37.695913 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:26:37.695926 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-07-06 20:26:37.695938 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-07-06 20:26:37.695950 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
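The three FAILED - RETRYING lines above (and the ok result that follows) are Ansible's retries/until loop waiting for the freshly restarted Grafana container on the first node to answer on its HTTP port before the remaining containers are restarted. Below is a minimal Python sketch of the same wait-until-ready pattern; the endpoint, retry count, and delay are illustrative assumptions and are not taken from the kolla-ansible handler itself.

import time
import urllib.error
import urllib.request


def wait_for_http(url: str, retries: int = 12, delay: float = 10.0) -> bool:
    # Poll the URL until it answers with HTTP 200 or the retries are exhausted,
    # mirroring the retries/until behaviour visible in the log above.
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                if response.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # connection refused or service not ready yet, try again
        print(f"FAILED - RETRYING ({retries - attempt} retries left)")
        time.sleep(delay)
    return False


if __name__ == "__main__":
    # Hypothetical internal endpoint; the testbed fronts Grafana on port 3000 via HAProxy.
    wait_for_http("https://api-int.testbed.osism.xyz:3000/login")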
2025-07-06 20:26:37.695962 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:26:37.695974 | orchestrator |
2025-07-06 20:26:37.695986 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-07-06 20:26:37.695998 | orchestrator | Sunday 06 July 2025 20:25:56 +0000 (0:00:38.475) 0:01:45.416 ***********
2025-07-06 20:26:37.696010 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:26:37.696022 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:26:37.696034 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:26:37.696045 | orchestrator |
2025-07-06 20:26:37.696058 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-07-06 20:26:37.696070 | orchestrator | Sunday 06 July 2025 20:26:31 +0000 (0:00:34.297) 0:02:19.713 ***********
2025-07-06 20:26:37.696082 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:26:37.696094 | orchestrator |
2025-07-06 20:26:37.696106 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-07-06 20:26:37.696118 | orchestrator | Sunday 06 July 2025 20:26:33 +0000 (0:00:02.308) 0:02:22.021 ***********
2025-07-06 20:26:37.696130 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:26:37.696142 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:26:37.696154 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:26:37.696166 | orchestrator |
2025-07-06 20:26:37.696178 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-07-06 20:26:37.696190 | orchestrator | Sunday 06 July 2025 20:26:33 +0000 (0:00:00.317) 0:02:22.339 ***********
2025-07-06 20:26:37.696202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-07-06 20:26:37.696220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-07-06 20:26:37.696293 | orchestrator |
2025-07-06 20:26:37.696307 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-07-06 20:26:37.696319 | orchestrator | Sunday 06 July 2025 20:26:36 +0000 (0:00:02.448) 0:02:24.787 ***********
2025-07-06 20:26:37.696332 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:26:37.696344 | orchestrator |
2025-07-06 20:26:37.696357 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:26:37.696370 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-06 20:26:37.696383 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-06 20:26:37.696396 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-06 20:26:37.696408 | orchestrator |
2025-07-06 20:26:37.696421 | orchestrator |
2025-07-06 20:26:37.696434 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:26:37.696454 | orchestrator | Sunday 06 July 2025 20:26:36 +0000 (0:00:00.245) 0:02:25.033 ***********
2025-07-06 20:26:37.696466 | orchestrator | ===============================================================================
2025-07-06 20:26:37.696479 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 39.20s
2025-07-06 20:26:37.696492 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.48s
2025-07-06 20:26:37.696511 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 34.30s
2025-07-06 20:26:37.696524 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.93s
2025-07-06 20:26:37.696539 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.45s
2025-07-06 20:26:37.696552 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.33s
2025-07-06 20:26:37.696565 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.31s
2025-07-06 20:26:37.696578 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.25s
2025-07-06 20:26:37.696591 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.33s
2025-07-06 20:26:37.696605 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.32s
2025-07-06 20:26:37.696619 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.25s
2025-07-06 20:26:37.696633 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.22s
2025-07-06 20:26:37.696641 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.21s
2025-07-06 20:26:37.696649 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.11s
2025-07-06 20:26:37.696657 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.01s
2025-07-06 20:26:37.696665 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.90s
2025-07-06 20:26:37.696673 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.82s
2025-07-06 20:26:37.696680 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.73s
2025-07-06 20:26:37.696688 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.69s
2025-07-06 20:26:37.696696 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.65s
2025-07-06 20:26:37.696704 | orchestrator | 2025-07-06 20:26:37 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED
2025-07-06 20:26:37.696712 | orchestrator | 2025-07-06 20:26:37 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED
2025-07-06 20:26:37.696720 | orchestrator | 2025-07-06 20:26:37 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:26:40.739888 | orchestrator | 2025-07-06 20:26:40 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED
2025-07-06 20:26:40.741519 | orchestrator | 2025-07-06 20:26:40 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED
2025-07-06 20:26:40.741552 | orchestrator | 2025-07-06 20:26:40 | INFO  | Wait 1 second(s) until the next
check 2025-07-06 20:26:43.788806 | orchestrator | 2025-07-06 20:26:43 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:26:43.790980 | orchestrator | 2025-07-06 20:26:43 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:43.791092 | orchestrator | 2025-07-06 20:26:43 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:46.831279 | orchestrator | 2025-07-06 20:26:46 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:26:46.832373 | orchestrator | 2025-07-06 20:26:46 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:46.832409 | orchestrator | 2025-07-06 20:26:46 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:49.872682 | orchestrator | 2025-07-06 20:26:49 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:26:49.873134 | orchestrator | 2025-07-06 20:26:49 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:49.873163 | orchestrator | 2025-07-06 20:26:49 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:52.917627 | orchestrator | 2025-07-06 20:26:52 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:26:52.918457 | orchestrator | 2025-07-06 20:26:52 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:52.918554 | orchestrator | 2025-07-06 20:26:52 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:55.957764 | orchestrator | 2025-07-06 20:26:55 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:26:55.959436 | orchestrator | 2025-07-06 20:26:55 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:55.959471 | orchestrator | 2025-07-06 20:26:55 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:58.993678 | orchestrator | 2025-07-06 20:26:58 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:26:58.993940 | orchestrator | 2025-07-06 20:26:58 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:26:58.993961 | orchestrator | 2025-07-06 20:26:58 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:02.034438 | orchestrator | 2025-07-06 20:27:02 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:02.034539 | orchestrator | 2025-07-06 20:27:02 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:27:02.034556 | orchestrator | 2025-07-06 20:27:02 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:05.076641 | orchestrator | 2025-07-06 20:27:05 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:05.078635 | orchestrator | 2025-07-06 20:27:05 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:27:05.078728 | orchestrator | 2025-07-06 20:27:05 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:08.120131 | orchestrator | 2025-07-06 20:27:08 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:08.122294 | orchestrator | 2025-07-06 20:27:08 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:27:08.122329 | orchestrator | 2025-07-06 20:27:08 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:11.163511 | orchestrator | 2025-07-06 20:27:11 | INFO  | Task 
a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:11.163616 | orchestrator | 2025-07-06 20:27:11 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state STARTED 2025-07-06 20:27:11.163632 | orchestrator | 2025-07-06 20:27:11 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:14.195343 | orchestrator | 2025-07-06 20:27:14 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:14.196592 | orchestrator | 2025-07-06 20:27:14 | INFO  | Task 2af4b013-9deb-4cbe-9d3c-a8361803bb37 is in state SUCCESS 2025-07-06 20:27:14.196621 | orchestrator | 2025-07-06 20:27:14 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:17.239097 | orchestrator | 2025-07-06 20:27:17 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:17.239199 | orchestrator | 2025-07-06 20:27:17 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:20.279590 | orchestrator | 2025-07-06 20:27:20 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:20.279680 | orchestrator | 2025-07-06 20:27:20 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:23.324959 | orchestrator | 2025-07-06 20:27:23 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:23.325058 | orchestrator | 2025-07-06 20:27:23 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:26.375341 | orchestrator | 2025-07-06 20:27:26 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:26.375435 | orchestrator | 2025-07-06 20:27:26 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:29.414929 | orchestrator | 2025-07-06 20:27:29 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:29.414986 | orchestrator | 2025-07-06 20:27:29 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:32.456811 | orchestrator | 2025-07-06 20:27:32 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:32.456871 | orchestrator | 2025-07-06 20:27:32 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:35.500702 | orchestrator | 2025-07-06 20:27:35 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:35.500768 | orchestrator | 2025-07-06 20:27:35 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:38.547895 | orchestrator | 2025-07-06 20:27:38 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:38.547956 | orchestrator | 2025-07-06 20:27:38 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:41.586197 | orchestrator | 2025-07-06 20:27:41 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:41.586336 | orchestrator | 2025-07-06 20:27:41 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:44.631452 | orchestrator | 2025-07-06 20:27:44 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:44.631566 | orchestrator | 2025-07-06 20:27:44 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:47.674226 | orchestrator | 2025-07-06 20:27:47 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:47.674372 | orchestrator | 2025-07-06 20:27:47 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:50.716644 | orchestrator | 2025-07-06 20:27:50 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 
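The interleaved "Task <uuid> is in state STARTED" and "Wait 1 second(s) until the next check" messages in this stretch come from the deployment wrapper on the orchestrator, which simply polls the state of the tasks it has queued until each one reports SUCCESS. A minimal Python sketch of that polling pattern follows; get_task_state is a hypothetical stand-in for the real task-backend query, and the interval is illustrative, not taken from the osism client.

import time
from typing import Callable, Iterable


def wait_for_tasks(task_ids: Iterable[str],
                   get_task_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    # Poll every pending task, print its state, and drop it once it reaches SUCCESS.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)

# Usage sketch with the task IDs seen in this log:
# wait_for_tasks(["a47ad474-cf0e-4f6d-bfd9-d661dfbc021e",
#                 "2af4b013-9deb-4cbe-9d3c-a8361803bb37"], get_task_state)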
2025-07-06 20:27:50.716748 | orchestrator | 2025-07-06 20:27:50 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:53.765941 | orchestrator | 2025-07-06 20:27:53 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:53.766123 | orchestrator | 2025-07-06 20:27:53 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:56.815067 | orchestrator | 2025-07-06 20:27:56 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:56.818782 | orchestrator | 2025-07-06 20:27:56 | INFO  | Task 4c7dcec6-28b4-44a5-85fa-53674d2e331e is in state STARTED 2025-07-06 20:27:56.818860 | orchestrator | 2025-07-06 20:27:56 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:59.854214 | orchestrator | 2025-07-06 20:27:59 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:27:59.854377 | orchestrator | 2025-07-06 20:27:59 | INFO  | Task 4c7dcec6-28b4-44a5-85fa-53674d2e331e is in state STARTED 2025-07-06 20:27:59.854394 | orchestrator | 2025-07-06 20:27:59 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:02.888105 | orchestrator | 2025-07-06 20:28:02 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:02.888548 | orchestrator | 2025-07-06 20:28:02 | INFO  | Task 4c7dcec6-28b4-44a5-85fa-53674d2e331e is in state STARTED 2025-07-06 20:28:02.888580 | orchestrator | 2025-07-06 20:28:02 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:05.939588 | orchestrator | 2025-07-06 20:28:05 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:05.944655 | orchestrator | 2025-07-06 20:28:05 | INFO  | Task 4c7dcec6-28b4-44a5-85fa-53674d2e331e is in state STARTED 2025-07-06 20:28:05.944741 | orchestrator | 2025-07-06 20:28:05 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:08.978282 | orchestrator | 2025-07-06 20:28:08 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:08.978503 | orchestrator | 2025-07-06 20:28:08 | INFO  | Task 4c7dcec6-28b4-44a5-85fa-53674d2e331e is in state STARTED 2025-07-06 20:28:08.978635 | orchestrator | 2025-07-06 20:28:08 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:12.019735 | orchestrator | 2025-07-06 20:28:12 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:12.020440 | orchestrator | 2025-07-06 20:28:12 | INFO  | Task 4c7dcec6-28b4-44a5-85fa-53674d2e331e is in state STARTED 2025-07-06 20:28:12.020457 | orchestrator | 2025-07-06 20:28:12 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:15.061354 | orchestrator | 2025-07-06 20:28:15 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:15.061790 | orchestrator | 2025-07-06 20:28:15 | INFO  | Task 4c7dcec6-28b4-44a5-85fa-53674d2e331e is in state SUCCESS 2025-07-06 20:28:15.061996 | orchestrator | 2025-07-06 20:28:15 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:18.116039 | orchestrator | 2025-07-06 20:28:18 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:18.116129 | orchestrator | 2025-07-06 20:28:18 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:21.165232 | orchestrator | 2025-07-06 20:28:21 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:21.165398 | orchestrator | 2025-07-06 20:28:21 | INFO  | Wait 1 second(s) until the next check 2025-07-06 
20:28:24.221385 | orchestrator | 2025-07-06 20:28:24 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:24.221617 | orchestrator | 2025-07-06 20:28:24 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:27.273426 | orchestrator | 2025-07-06 20:28:27 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:27.273530 | orchestrator | 2025-07-06 20:28:27 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:30.307421 | orchestrator | 2025-07-06 20:28:30 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:30.307568 | orchestrator | 2025-07-06 20:28:30 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:33.357771 | orchestrator | 2025-07-06 20:28:33 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:33.357846 | orchestrator | 2025-07-06 20:28:33 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:36.404989 | orchestrator | 2025-07-06 20:28:36 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:36.405105 | orchestrator | 2025-07-06 20:28:36 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:39.461348 | orchestrator | 2025-07-06 20:28:39 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:39.461484 | orchestrator | 2025-07-06 20:28:39 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:42.502903 | orchestrator | 2025-07-06 20:28:42 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:42.503022 | orchestrator | 2025-07-06 20:28:42 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:45.539482 | orchestrator | 2025-07-06 20:28:45 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:45.539567 | orchestrator | 2025-07-06 20:28:45 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:48.584718 | orchestrator | 2025-07-06 20:28:48 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:48.584828 | orchestrator | 2025-07-06 20:28:48 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:51.629904 | orchestrator | 2025-07-06 20:28:51 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:51.630010 | orchestrator | 2025-07-06 20:28:51 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:54.676281 | orchestrator | 2025-07-06 20:28:54 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:54.676409 | orchestrator | 2025-07-06 20:28:54 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:57.723919 | orchestrator | 2025-07-06 20:28:57 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:28:57.724018 | orchestrator | 2025-07-06 20:28:57 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:29:00.769185 | orchestrator | 2025-07-06 20:29:00 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:29:00.769293 | orchestrator | 2025-07-06 20:29:00 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:29:03.821781 | orchestrator | 2025-07-06 20:29:03 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:29:03.821883 | orchestrator | 2025-07-06 20:29:03 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:29:06.868702 | orchestrator | 2025-07-06 20:29:06 | INFO  | Task 
a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:29:06.868811 | orchestrator | 2025-07-06 20:29:06 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:29:09.909094 | orchestrator | 2025-07-06 20:29:09 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:29:09.909199 | orchestrator | 2025-07-06 20:29:09 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:29:12.956647 | orchestrator | 2025-07-06 20:29:12 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:29:12.956763 | orchestrator | 2025-07-06 20:29:12 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:29:16.003443 | orchestrator | 2025-07-06 20:29:16 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:29:16.003565 | orchestrator | 2025-07-06 20:29:16 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:29:19.052861 | orchestrator | 2025-07-06 20:29:19 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:29:19.052971 | orchestrator | 2025-07-06 20:29:19 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:29:22.094601 | orchestrator | 2025-07-06 20:29:22 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:29:22.094711 | orchestrator | 2025-07-06 20:29:22 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:29:25.129891 | orchestrator | 2025-07-06 20:29:25 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:29:25.129998 | orchestrator | 2025-07-06 20:29:25 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:29:28.174200 | orchestrator | 2025-07-06 20:29:28 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:29:28.174349 | orchestrator | 2025-07-06 20:29:28 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:29:31.214950 | orchestrator | 2025-07-06 20:29:31 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:29:31.215062 | orchestrator | 2025-07-06 20:29:31 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:29:34.266993 | orchestrator | 2025-07-06 20:29:34 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:29:34.267116 | orchestrator | 2025-07-06 20:29:34 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:29:37.307083 | orchestrator | 2025-07-06 20:29:37 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:29:37.307196 | orchestrator | 2025-07-06 20:29:37 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:29:40.356943 | orchestrator | 2025-07-06 20:29:40 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:29:40.357026 | orchestrator | 2025-07-06 20:29:40 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:29:43.409532 | orchestrator | 2025-07-06 20:29:43 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:29:43.409639 | orchestrator | 2025-07-06 20:29:43 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:29:46.455350 | orchestrator | 2025-07-06 20:29:46 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 20:29:46.455452 | orchestrator | 2025-07-06 20:29:46 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:29:49.499538 | orchestrator | 2025-07-06 20:29:49 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED 2025-07-06 
20:29:49.499652 | orchestrator | 2025-07-06 20:29:49 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:29:52.547875 | orchestrator | 2025-07-06 20:29:52 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED
2025-07-06 20:29:52.547978 | orchestrator | 2025-07-06 20:29:52 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:29:55.598351 | orchestrator | 2025-07-06 20:29:55 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED
2025-07-06 20:29:55.598461 | orchestrator | 2025-07-06 20:29:55 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:29:58.636812 | orchestrator | 2025-07-06 20:29:58 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED
2025-07-06 20:29:58.636913 | orchestrator | 2025-07-06 20:29:58 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:30:01.676035 | orchestrator | 2025-07-06 20:30:01 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state STARTED
2025-07-06 20:30:01.676152 | orchestrator | 2025-07-06 20:30:01 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:30:04.727706 | orchestrator | 2025-07-06 20:30:04 | INFO  | Task a47ad474-cf0e-4f6d-bfd9-d661dfbc021e is in state SUCCESS
2025-07-06 20:30:04.727954 | orchestrator | 2025-07-06 20:30:04 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-06 20:30:04.732790 | orchestrator |
2025-07-06 20:30:04.732888 | orchestrator |
2025-07-06 20:30:04.732903 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-07-06 20:30:04.732916 | orchestrator |
2025-07-06 20:30:04.732927 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-07-06 20:30:04.733052 | orchestrator | Sunday 06 July 2025 20:21:02 +0000 (0:00:00.073) 0:00:00.073 ***********
2025-07-06 20:30:04.733071 | orchestrator | changed: [localhost]
2025-07-06 20:30:04.733083 | orchestrator |
2025-07-06 20:30:04.733095 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-07-06 20:30:04.733107 | orchestrator | Sunday 06 July 2025 20:21:03 +0000 (0:00:00.825) 0:00:00.899 ***********
2025-07-06 20:30:04.733119 | orchestrator |
2025-07-06 20:30:04.733146 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-07-06 20:30:04.733158 | orchestrator |
2025-07-06 20:30:04.733170 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-07-06 20:30:04.733182 | orchestrator |
2025-07-06 20:30:04.733195 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-07-06 20:30:04.733207 | orchestrator |
2025-07-06 20:30:04.733219 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-07-06 20:30:04.733271 | orchestrator |
2025-07-06 20:30:04.733284 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-07-06 20:30:04.733295 | orchestrator |
2025-07-06 20:30:04.733306 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-07-06 20:30:04.733317 | orchestrator |
2025-07-06 20:30:04.733329 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-07-06 20:30:04.733341 | orchestrator | changed: [localhost]
2025-07-06 20:30:04.733354 | orchestrator |
2025-07-06 20:30:04.733367 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-07-06 20:30:04.733380 | orchestrator | Sunday 06 July 2025 20:26:57 +0000 (0:05:53.829) 0:05:54.728 ***********
2025-07-06 20:30:04.733392 | orchestrator | changed: [localhost]
2025-07-06 20:30:04.733405 | orchestrator |
2025-07-06 20:30:04.733418 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:30:04.733430 | orchestrator |
2025-07-06 20:30:04.733443 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:30:04.733456 | orchestrator | Sunday 06 July 2025 20:27:11 +0000 (0:00:13.562) 0:06:08.291 ***********
2025-07-06 20:30:04.733468 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:30:04.733481 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:30:04.733494 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:30:04.733506 | orchestrator |
2025-07-06 20:30:04.733519 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:30:04.733532 | orchestrator | Sunday 06 July 2025 20:27:11 +0000 (0:00:00.308) 0:06:08.599 ***********
2025-07-06 20:30:04.733546 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-07-06 20:30:04.733559 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-07-06 20:30:04.733572 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-07-06 20:30:04.733585 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-07-06 20:30:04.733597 | orchestrator |
2025-07-06 20:30:04.733610 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-07-06 20:30:04.733623 | orchestrator | skipping: no hosts matched
2025-07-06 20:30:04.733636 | orchestrator |
2025-07-06 20:30:04.733649 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:30:04.733662 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:30:04.733678 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:30:04.733692 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:30:04.733703 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:30:04.733713 | orchestrator |
2025-07-06 20:30:04.733734 | orchestrator |
2025-07-06 20:30:04.733745 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:30:04.733756 | orchestrator | Sunday 06 July 2025 20:27:11 +0000 (0:00:00.413) 0:06:09.012 ***********
2025-07-06 20:30:04.733767 | orchestrator | ===============================================================================
2025-07-06 20:30:04.733778 | orchestrator | Download ironic-agent initramfs --------------------------------------- 353.83s
2025-07-06 20:30:04.733789 | orchestrator | Download ironic-agent kernel ------------------------------------------- 13.56s
2025-07-06 20:30:04.733800 | orchestrator | Ensure the destination directory exists --------------------------------- 0.83s
2025-07-06 20:30:04.733810 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s
2025-07-06 20:30:04.733821 | orchestrator | Group hosts based on Kolla action
--------------------------------------- 0.31s 2025-07-06 20:30:04.733832 | orchestrator | 2025-07-06 20:30:04.733843 | orchestrator | None 2025-07-06 20:30:04.733854 | orchestrator | 2025-07-06 20:30:04.733865 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:30:04.733876 | orchestrator | 2025-07-06 20:30:04.733888 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:30:04.733899 | orchestrator | Sunday 06 July 2025 20:25:13 +0000 (0:00:00.253) 0:00:00.253 *********** 2025-07-06 20:30:04.733910 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:30:04.733921 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:30:04.733932 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:30:04.733943 | orchestrator | 2025-07-06 20:30:04.733954 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:30:04.733965 | orchestrator | Sunday 06 July 2025 20:25:13 +0000 (0:00:00.285) 0:00:00.539 *********** 2025-07-06 20:30:04.733976 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-07-06 20:30:04.733987 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-07-06 20:30:04.734062 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-07-06 20:30:04.734083 | orchestrator | 2025-07-06 20:30:04.734104 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-07-06 20:30:04.734123 | orchestrator | 2025-07-06 20:30:04.734140 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-06 20:30:04.734160 | orchestrator | Sunday 06 July 2025 20:25:13 +0000 (0:00:00.412) 0:00:00.951 *********** 2025-07-06 20:30:04.734180 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:30:04.734201 | orchestrator | 2025-07-06 20:30:04.734255 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-07-06 20:30:04.734276 | orchestrator | Sunday 06 July 2025 20:25:14 +0000 (0:00:00.520) 0:00:01.472 *********** 2025-07-06 20:30:04.734295 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-07-06 20:30:04.734314 | orchestrator | 2025-07-06 20:30:04.734332 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-07-06 20:30:04.734350 | orchestrator | Sunday 06 July 2025 20:25:17 +0000 (0:00:03.320) 0:00:04.793 *********** 2025-07-06 20:30:04.734369 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-07-06 20:30:04.734389 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-07-06 20:30:04.734408 | orchestrator | 2025-07-06 20:30:04.734428 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-07-06 20:30:04.734447 | orchestrator | Sunday 06 July 2025 20:25:24 +0000 (0:00:06.453) 0:00:11.246 *********** 2025-07-06 20:30:04.734466 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-06 20:30:04.734486 | orchestrator | 2025-07-06 20:30:04.734506 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-07-06 20:30:04.734525 | orchestrator | Sunday 06 July 2025 20:25:27 +0000 (0:00:03.235) 
0:00:14.482 *********** 2025-07-06 20:30:04.734544 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-06 20:30:04.734577 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-07-06 20:30:04.734596 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-07-06 20:30:04.734616 | orchestrator | 2025-07-06 20:30:04.734635 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-07-06 20:30:04.734654 | orchestrator | Sunday 06 July 2025 20:25:35 +0000 (0:00:08.441) 0:00:22.924 *********** 2025-07-06 20:30:04.734674 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-06 20:30:04.734693 | orchestrator | 2025-07-06 20:30:04.734713 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-07-06 20:30:04.734733 | orchestrator | Sunday 06 July 2025 20:25:39 +0000 (0:00:03.791) 0:00:26.715 *********** 2025-07-06 20:30:04.734752 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-07-06 20:30:04.734772 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-07-06 20:30:04.734790 | orchestrator | 2025-07-06 20:30:04.734810 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-07-06 20:30:04.734829 | orchestrator | Sunday 06 July 2025 20:25:47 +0000 (0:00:07.881) 0:00:34.597 *********** 2025-07-06 20:30:04.734848 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-07-06 20:30:04.734868 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-07-06 20:30:04.734887 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-07-06 20:30:04.734905 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-07-06 20:30:04.734925 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-07-06 20:30:04.734945 | orchestrator | 2025-07-06 20:30:04.734970 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-06 20:30:04.734991 | orchestrator | Sunday 06 July 2025 20:26:02 +0000 (0:00:15.408) 0:00:50.006 *********** 2025-07-06 20:30:04.735010 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:30:04.735030 | orchestrator | 2025-07-06 20:30:04.735050 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-07-06 20:30:04.735070 | orchestrator | Sunday 06 July 2025 20:26:03 +0000 (0:00:00.587) 0:00:50.594 *********** 2025-07-06 20:30:04.735090 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.735110 | orchestrator | 2025-07-06 20:30:04.735130 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-07-06 20:30:04.735149 | orchestrator | Sunday 06 July 2025 20:26:08 +0000 (0:00:05.089) 0:00:55.683 *********** 2025-07-06 20:30:04.735168 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.735188 | orchestrator | 2025-07-06 20:30:04.735208 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-07-06 20:30:04.735228 | orchestrator | Sunday 06 July 2025 20:26:12 +0000 (0:00:04.203) 0:00:59.887 *********** 2025-07-06 20:30:04.735271 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:30:04.735291 | orchestrator | 2025-07-06 
20:30:04.735311 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-07-06 20:30:04.735331 | orchestrator | Sunday 06 July 2025 20:26:15 +0000 (0:00:03.084) 0:01:02.971 *********** 2025-07-06 20:30:04.735351 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-07-06 20:30:04.735371 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-07-06 20:30:04.735391 | orchestrator | 2025-07-06 20:30:04.735411 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-07-06 20:30:04.735431 | orchestrator | Sunday 06 July 2025 20:26:25 +0000 (0:00:09.760) 0:01:12.732 *********** 2025-07-06 20:30:04.735451 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-07-06 20:30:04.735486 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-07-06 20:30:04.735518 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-07-06 20:30:04.735538 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-07-06 20:30:04.735558 | orchestrator | 2025-07-06 20:30:04.735586 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-07-06 20:30:04.735606 | orchestrator | Sunday 06 July 2025 20:26:41 +0000 (0:00:16.324) 0:01:29.056 *********** 2025-07-06 20:30:04.735626 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.735646 | orchestrator | 2025-07-06 20:30:04.735665 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-07-06 20:30:04.735683 | orchestrator | Sunday 06 July 2025 20:26:46 +0000 (0:00:04.491) 0:01:33.548 *********** 2025-07-06 20:30:04.735701 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.735720 | orchestrator | 2025-07-06 20:30:04.735738 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-07-06 20:30:04.735756 | orchestrator | Sunday 06 July 2025 20:26:51 +0000 (0:00:05.584) 0:01:39.132 *********** 2025-07-06 20:30:04.735775 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:30:04.735793 | orchestrator | 2025-07-06 20:30:04.735812 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-07-06 20:30:04.735831 | orchestrator | Sunday 06 July 2025 20:26:52 +0000 (0:00:00.203) 0:01:39.336 *********** 2025-07-06 20:30:04.735849 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.735868 | orchestrator | 2025-07-06 20:30:04.735886 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-06 20:30:04.735905 | orchestrator | Sunday 06 July 2025 20:26:56 +0000 (0:00:04.571) 0:01:43.908 *********** 2025-07-06 20:30:04.735923 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:30:04.735942 | orchestrator | 2025-07-06 20:30:04.735961 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-07-06 20:30:04.735980 | orchestrator | Sunday 06 July 2025 20:26:57 +0000 (0:00:01.000) 
0:01:44.909 *********** 2025-07-06 20:30:04.735998 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.736016 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:30:04.736034 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:30:04.736053 | orchestrator | 2025-07-06 20:30:04.736071 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-07-06 20:30:04.736090 | orchestrator | Sunday 06 July 2025 20:27:03 +0000 (0:00:05.515) 0:01:50.424 *********** 2025-07-06 20:30:04.736110 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.736129 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:30:04.736148 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:30:04.736166 | orchestrator | 2025-07-06 20:30:04.736183 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-07-06 20:30:04.736195 | orchestrator | Sunday 06 July 2025 20:27:07 +0000 (0:00:04.273) 0:01:54.697 *********** 2025-07-06 20:30:04.736206 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.736217 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:30:04.736228 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:30:04.736287 | orchestrator | 2025-07-06 20:30:04.736298 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-07-06 20:30:04.736311 | orchestrator | Sunday 06 July 2025 20:27:08 +0000 (0:00:00.728) 0:01:55.426 *********** 2025-07-06 20:30:04.736322 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:30:04.736333 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:30:04.736344 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:30:04.736355 | orchestrator | 2025-07-06 20:30:04.736366 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-07-06 20:30:04.736377 | orchestrator | Sunday 06 July 2025 20:27:10 +0000 (0:00:02.124) 0:01:57.550 *********** 2025-07-06 20:30:04.736397 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:30:04.736408 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.736419 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:30:04.736430 | orchestrator | 2025-07-06 20:30:04.736441 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-07-06 20:30:04.736452 | orchestrator | Sunday 06 July 2025 20:27:11 +0000 (0:00:01.333) 0:01:58.883 *********** 2025-07-06 20:30:04.736463 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.736474 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:30:04.736485 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:30:04.736496 | orchestrator | 2025-07-06 20:30:04.736507 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-07-06 20:30:04.736518 | orchestrator | Sunday 06 July 2025 20:27:12 +0000 (0:00:01.291) 0:02:00.175 *********** 2025-07-06 20:30:04.736529 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.736540 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:30:04.736551 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:30:04.736562 | orchestrator | 2025-07-06 20:30:04.736573 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-07-06 20:30:04.736584 | orchestrator | Sunday 06 July 2025 20:27:15 +0000 (0:00:02.095) 0:02:02.270 *********** 2025-07-06 20:30:04.736595 | orchestrator | changed: [testbed-node-0] 
2025-07-06 20:30:04.736606 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:30:04.736617 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:30:04.736628 | orchestrator | 2025-07-06 20:30:04.736639 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-07-06 20:30:04.736650 | orchestrator | Sunday 06 July 2025 20:27:16 +0000 (0:00:01.874) 0:02:04.145 *********** 2025-07-06 20:30:04.736662 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:30:04.736673 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:30:04.736684 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:30:04.736694 | orchestrator | 2025-07-06 20:30:04.736706 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-07-06 20:30:04.736752 | orchestrator | Sunday 06 July 2025 20:27:17 +0000 (0:00:00.632) 0:02:04.778 *********** 2025-07-06 20:30:04.736765 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:30:04.736790 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:30:04.736802 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:30:04.736813 | orchestrator | 2025-07-06 20:30:04.736824 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-06 20:30:04.736836 | orchestrator | Sunday 06 July 2025 20:27:21 +0000 (0:00:03.985) 0:02:08.763 *********** 2025-07-06 20:30:04.736848 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:30:04.736859 | orchestrator | 2025-07-06 20:30:04.736877 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-07-06 20:30:04.736889 | orchestrator | Sunday 06 July 2025 20:27:22 +0000 (0:00:00.684) 0:02:09.448 *********** 2025-07-06 20:30:04.736900 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:30:04.736911 | orchestrator | 2025-07-06 20:30:04.736922 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-07-06 20:30:04.736933 | orchestrator | Sunday 06 July 2025 20:27:25 +0000 (0:00:03.298) 0:02:12.747 *********** 2025-07-06 20:30:04.736945 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:30:04.736956 | orchestrator | 2025-07-06 20:30:04.736967 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-07-06 20:30:04.736978 | orchestrator | Sunday 06 July 2025 20:27:28 +0000 (0:00:03.078) 0:02:15.825 *********** 2025-07-06 20:30:04.736989 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-07-06 20:30:04.737000 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-07-06 20:30:04.737011 | orchestrator | 2025-07-06 20:30:04.737022 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-07-06 20:30:04.737034 | orchestrator | Sunday 06 July 2025 20:27:35 +0000 (0:00:06.789) 0:02:22.615 *********** 2025-07-06 20:30:04.737051 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:30:04.737062 | orchestrator | 2025-07-06 20:30:04.737074 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-07-06 20:30:04.737085 | orchestrator | Sunday 06 July 2025 20:27:38 +0000 (0:00:03.369) 0:02:25.984 *********** 2025-07-06 20:30:04.737096 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:30:04.737107 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:30:04.737118 | orchestrator | ok: [testbed-node-2] 
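The octavia prepare steps logged above (amphora flavor and keypair, the lb-mgmt-sec-grp / lb-health-mgr-sec-grp security groups with their ICMP, TCP/22, TCP/9443 and UDP/5555 ingress rules, and the loadbalancer management network, subnet and health-manager ports) are all driven by the kolla-ansible octavia role on the hosts grouped under enable_octavia_True. As a rough, non-authoritative sketch of what the security-group portion amounts to, the snippet below recreates those rules with openstacksdk; the cloud name "testbed" and the standalone-script form are assumptions for illustration only and are not part of the job.

```python
# Illustrative sketch only -- the job applies these rules through the
# kolla-ansible octavia role (Ansible modules), not through this script.
# Assumes a clouds.yaml entry named "testbed" (hypothetical).
import openstack

# Security-group rules exactly as listed in the task output above.
RULES = [
    ("lb-mgmt-sec-grp", "icmp", None),        # amphora reachability checks
    ("lb-mgmt-sec-grp", "tcp", 22),           # SSH into amphorae
    ("lb-mgmt-sec-grp", "tcp", 9443),         # amphora agent API
    ("lb-health-mgr-sec-grp", "udp", 5555),   # health-manager heartbeats
]

conn = openstack.connect(cloud="testbed")

# Ensure both groups exist before adding rules.
for name in ("lb-mgmt-sec-grp", "lb-health-mgr-sec-grp"):
    if conn.network.find_security_group(name) is None:
        conn.network.create_security_group(name=name)

for group, protocol, port in RULES:
    sg = conn.network.find_security_group(group)
    rule = {
        "security_group_id": sg.id,
        "direction": "ingress",
        "ethertype": "IPv4",
        "protocol": protocol,
    }
    if port is not None:
        rule["port_range_min"] = rule["port_range_max"] = port
    # Re-running this raises a conflict for rules that already exist; the
    # Ansible modules used by the role handle idempotence instead.
    conn.network.create_security_group_rule(**rule)
```

The following "Ensuring config directories exist" task then renders the per-service container definitions, including the healthcheck_curl / healthcheck_port probes visible in the item output below.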
2025-07-06 20:30:04.737129 | orchestrator | 2025-07-06 20:30:04.737140 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-07-06 20:30:04.737151 | orchestrator | Sunday 06 July 2025 20:27:39 +0000 (0:00:00.309) 0:02:26.294 *********** 2025-07-06 20:30:04.737167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:30:04.737184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:30:04.737205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:30:04.737223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-06 20:30:04.737263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-06 20:30:04.737275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-06 20:30:04.737288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.737302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.737314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.737334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.737351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.737370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:30:04.737382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.737394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:30:04.737414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:30:04.737432 | orchestrator | 2025-07-06 20:30:04.737452 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-07-06 20:30:04.737471 | orchestrator | Sunday 06 July 2025 20:27:41 +0000 (0:00:02.650) 0:02:28.945 *********** 2025-07-06 20:30:04.737492 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:30:04.737505 | orchestrator | 2025-07-06 20:30:04.737516 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-07-06 20:30:04.737527 | orchestrator | Sunday 06 July 2025 20:27:42 +0000 (0:00:00.322) 0:02:29.267 *********** 2025-07-06 20:30:04.737538 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:30:04.737549 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:30:04.737560 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:30:04.737571 | orchestrator | 2025-07-06 20:30:04.737582 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-07-06 20:30:04.737599 | orchestrator | Sunday 06 July 2025 20:27:42 +0000 (0:00:00.304) 0:02:29.572 *********** 2025-07-06 20:30:04.737624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-06 20:30:04.737644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-06 20:30:04.737655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-06 20:30:04.737667 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-06 20:30:04.737679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:30:04.737691 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:30:04.737710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-06 20:30:04.737734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-06 20:30:04.737747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-06 20:30:04.737758 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-06 20:30:04.737770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:30:04.737782 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:30:04.737794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-06 20:30:04.738689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-06 20:30:04.738742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-06 
20:30:04.738755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-06 20:30:04.738766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:30:04.738804 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:30:04.738815 | orchestrator | 2025-07-06 20:30:04.738826 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-06 20:30:04.738838 | orchestrator | Sunday 06 July 2025 20:27:42 +0000 (0:00:00.644) 0:02:30.216 *********** 2025-07-06 20:30:04.738849 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:30:04.738859 | orchestrator | 2025-07-06 20:30:04.738869 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-07-06 20:30:04.738880 | orchestrator | Sunday 06 July 2025 20:27:43 +0000 (0:00:00.522) 0:02:30.739 *********** 2025-07-06 20:30:04.738892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:30:04.738934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:30:04.738960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:30:04.738971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-06 20:30:04.738983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-06 20:30:04.738993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-06 20:30:04.739004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.739020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.739041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.739053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.739065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.739076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.739087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:30:04.739097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:30:04.739121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:30:04.739133 | orchestrator | 2025-07-06 20:30:04.739143 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-07-06 20:30:04.739154 | orchestrator | Sunday 06 July 2025 20:27:48 +0000 (0:00:05.339) 0:02:36.078 *********** 2025-07-06 20:30:04.739170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-06 20:30:04.739181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-06 20:30:04.739192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-06 20:30:04.739203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-06 20:30:04.739221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:30:04.739252 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:30:04.739276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-06 20:30:04.739288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-06 20:30:04.739299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-06 20:30:04.739312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-06 20:30:04.739325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:30:04.739349 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:30:04.739362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-06 20:30:04.739381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-06 20:30:04.739403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-06 20:30:04.739416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-06 20:30:04.739429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:30:04.739442 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:30:04.739454 | orchestrator | 2025-07-06 20:30:04.739466 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-07-06 20:30:04.739477 | orchestrator | Sunday 06 July 2025 20:27:49 +0000 (0:00:00.673) 0:02:36.752 *********** 2025-07-06 20:30:04.739494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-06 20:30:04.739505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-06 20:30:04.739523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-06 20:30:04.739538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-06 20:30:04.739549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:30:04.739560 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:30:04.739571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-06 20:30:04.739588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-06 20:30:04.739599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-06 20:30:04.739616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-06 20:30:04.739632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:30:04.739643 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:30:04.739654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-06 20:30:04.739665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-06 20:30:04.739682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-06 20:30:04.739692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-06 20:30:04.739709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:30:04.739721 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:30:04.739731 | orchestrator | 2025-07-06 20:30:04.739741 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-07-06 20:30:04.739753 | orchestrator | Sunday 06 July 2025 20:27:50 +0000 (0:00:00.853) 0:02:37.606 *********** 2025-07-06 20:30:04.739767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
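The two service-cert-copy tasks above are skipped on all three nodes, which matches the 'tls_backend': 'no' setting in the haproxy entries of the octavia-api definition: the API backends listen on plain HTTP behind HAProxy, so there is no internal certificate or key to copy. The healthcheck entries in the same definitions ('healthcheck_curl ...', 'healthcheck_port ...') become Docker-level health checks on the kolla containers. Once the containers have been started by the handlers further down, both can be checked directly on a control node (a sketch, assuming shell access to testbed-node-0 with sudo and the Docker CLI; the URL is the healthcheck target shown above):

  # plain-HTTP backend of octavia-api should answer with an HTTP status code
  curl -s -o /dev/null -w '%{http_code}\n' http://192.168.16.10:9876/

  # the same healthcheck as configured by kolla, and its current result
  sudo docker inspect --format '{{json .Config.Healthcheck}}' octavia_api
  sudo docker inspect --format '{{.State.Health.Status}}' octavia_api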
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:30:04.739784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:30:04.739812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:30:04.739829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-06 20:30:04.739872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': 
{}}}) 2025-07-06 20:30:04.739909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-06 20:30:04.739928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.739946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.739972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.739989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.740008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.740034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.740054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:30:04.740070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:30:04.740096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:30:04.740113 | orchestrator | 2025-07-06 20:30:04.740127 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-07-06 20:30:04.740142 | orchestrator | Sunday 06 July 2025 20:27:55 +0000 (0:00:05.496) 0:02:43.102 *********** 2025-07-06 20:30:04.740157 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-07-06 20:30:04.740171 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-07-06 20:30:04.740186 | orchestrator | changed: [testbed-node-2] => 
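The config.json files copied above drive kolla's container entrypoint: at startup, kolla_start runs set_configs.py inside the container, which reads /var/lib/kolla/config_files/config.json (the /etc/kolla/octavia-*/ bind mounts visible in the volume lists) and copies the listed files into place with the given owner and permissions before exec'ing the service command. A way to look at the generated file on a controller, plus a representative shape only, not the exact content produced by this run:

  sudo cat /etc/kolla/octavia-api/config.json
  # illustrative structure:
  # {
  #   "command": "...",
  #   "config_files": [
  #     {"source": "/var/lib/kolla/config_files/octavia.conf",
  #      "dest": "/etc/octavia/octavia.conf",
  #      "owner": "octavia", "perm": "0600"}
  #   ]
  # }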
(item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-07-06 20:30:04.740200 | orchestrator | 2025-07-06 20:30:04.740214 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-07-06 20:30:04.740229 | orchestrator | Sunday 06 July 2025 20:27:57 +0000 (0:00:01.678) 0:02:44.781 *********** 2025-07-06 20:30:04.740269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:30:04.740299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:30:04.740315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:30:04.740339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-06 20:30:04.740354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-06 20:30:04.740369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-06 20:30:04.740384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.740406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.740427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.740457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.740472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.740487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.740503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:30:04.740518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:30:04.740566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:30:04.740585 | orchestrator | 2025-07-06 20:30:04.740602 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-07-06 20:30:04.740629 | orchestrator | Sunday 06 July 2025 20:28:14 +0000 (0:00:16.505) 0:03:01.286 *********** 2025-07-06 20:30:04.740647 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.740665 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:30:04.740681 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:30:04.740698 | orchestrator | 2025-07-06 20:30:04.740710 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-07-06 20:30:04.740720 | orchestrator | Sunday 06 July 2025 20:28:15 +0000 (0:00:01.731) 0:03:03.017 *********** 2025-07-06 20:30:04.740730 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-07-06 20:30:04.740740 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-07-06 20:30:04.740749 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-07-06 20:30:04.740759 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-07-06 20:30:04.740768 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-07-06 20:30:04.740778 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-07-06 20:30:04.740787 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-07-06 20:30:04.740797 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-07-06 20:30:04.740807 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-07-06 20:30:04.740816 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-07-06 20:30:04.740826 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-07-06 20:30:04.740835 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-07-06 20:30:04.740845 | orchestrator | 2025-07-06 20:30:04.740854 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-07-06 20:30:04.740864 | orchestrator | Sunday 06 July 2025 20:28:21 +0000 (0:00:05.539) 0:03:08.557 *********** 2025-07-06 20:30:04.740874 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-07-06 20:30:04.740883 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-07-06 20:30:04.740893 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-07-06 20:30:04.740902 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-07-06 20:30:04.740912 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-07-06 20:30:04.740922 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-07-06 20:30:04.740931 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-07-06 20:30:04.740941 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-07-06 20:30:04.740950 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-07-06 20:30:04.740959 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-07-06 20:30:04.740969 | orchestrator | changed: [testbed-node-2] 
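The client.cert-and-key.pem, client_ca.cert.pem, server_ca.cert.pem and server_ca.key.pem files distributed in these tasks are the dual-CA material Octavia uses for mutual TLS between its control-plane services and the amphorae: the client CA signs the controller-side client certificate, while the server CA signs per-amphora server certificates. A quick sanity check of subjects and expiry dates on a control node (a sketch; the /etc/kolla/octavia-worker/ path is assumed from the bind mounts above):

  for f in client_ca.cert.pem server_ca.cert.pem client.cert-and-key.pem; do
    echo "== $f"
    sudo openssl x509 -in "/etc/kolla/octavia-worker/$f" -noout -subject -enddate
  done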
=> (item=server_ca.key.pem) 2025-07-06 20:30:04.740978 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-07-06 20:30:04.740988 | orchestrator | 2025-07-06 20:30:04.740998 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-07-06 20:30:04.741007 | orchestrator | Sunday 06 July 2025 20:28:26 +0000 (0:00:05.204) 0:03:13.761 *********** 2025-07-06 20:30:04.741017 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-07-06 20:30:04.741026 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-07-06 20:30:04.741036 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-07-06 20:30:04.741045 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-07-06 20:30:04.741055 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-07-06 20:30:04.741065 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-07-06 20:30:04.741074 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-07-06 20:30:04.741091 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-07-06 20:30:04.741100 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-07-06 20:30:04.741110 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-07-06 20:30:04.741120 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-07-06 20:30:04.741129 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-07-06 20:30:04.741139 | orchestrator | 2025-07-06 20:30:04.741149 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-07-06 20:30:04.741158 | orchestrator | Sunday 06 July 2025 20:28:31 +0000 (0:00:05.396) 0:03:19.158 *********** 2025-07-06 20:30:04.741181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:30:04.741192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:30:04.741203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-06 20:30:04.741214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-06 20:30:04.741287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-06 20:30:04.741307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-06 20:30:04.741323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.741334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.741344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.741355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.741365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.741382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-06 20:30:04.741399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:30:04.741413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:30:04.741424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:30:04.741434 | orchestrator | 2025-07-06 20:30:04.741444 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-06 20:30:04.741454 | orchestrator | Sunday 06 July 2025 20:28:35 +0000 (0:00:03.930) 0:03:23.089 *********** 2025-07-06 20:30:04.741464 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:30:04.741473 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:30:04.741483 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:30:04.741493 | orchestrator | 2025-07-06 20:30:04.741503 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-07-06 20:30:04.741512 | orchestrator | Sunday 06 July 2025 20:28:36 +0000 (0:00:00.310) 0:03:23.399 *********** 2025-07-06 20:30:04.741522 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.741532 | orchestrator | 2025-07-06 20:30:04.741541 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-07-06 20:30:04.741551 | orchestrator | Sunday 06 July 2025 20:28:38 +0000 (0:00:01.984) 0:03:25.383 *********** 2025-07-06 20:30:04.741561 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.741569 | orchestrator | 2025-07-06 20:30:04.741577 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-07-06 20:30:04.741589 | orchestrator | Sunday 06 July 2025 20:28:40 +0000 (0:00:02.790) 0:03:28.173 *********** 2025-07-06 20:30:04.741598 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.741606 | orchestrator | 2025-07-06 20:30:04.741613 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-07-06 20:30:04.741621 | orchestrator | Sunday 06 July 2025 
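These database tasks create two MariaDB schemas, octavia and octavia_persistence (the latter backs the taskflow persistence used by Octavia's amphora driver), plus a database user with privileges on both. They can be confirmed afterwards against the database VIP (a sketch; the host placeholder, credentials and exact schema/user names are assumptions inferred from the task names, not taken from this deployment):

  mysql -h <database-vip> -u root -p -e "SHOW DATABASES LIKE 'octavia%';"
  mysql -h <database-vip> -u root -p -e "SHOW GRANTS FOR 'octavia'@'%';"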
20:28:43 +0000 (0:00:02.233) 0:03:30.407 *********** 2025-07-06 20:30:04.741629 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.741637 | orchestrator | 2025-07-06 20:30:04.741645 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-07-06 20:30:04.741653 | orchestrator | Sunday 06 July 2025 20:28:45 +0000 (0:00:02.391) 0:03:32.798 *********** 2025-07-06 20:30:04.741661 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.741669 | orchestrator | 2025-07-06 20:30:04.741677 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-07-06 20:30:04.741685 | orchestrator | Sunday 06 July 2025 20:29:06 +0000 (0:00:20.601) 0:03:53.400 *********** 2025-07-06 20:30:04.741693 | orchestrator | 2025-07-06 20:30:04.741701 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-07-06 20:30:04.741709 | orchestrator | Sunday 06 July 2025 20:29:06 +0000 (0:00:00.066) 0:03:53.467 *********** 2025-07-06 20:30:04.741717 | orchestrator | 2025-07-06 20:30:04.741725 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-07-06 20:30:04.741733 | orchestrator | Sunday 06 July 2025 20:29:06 +0000 (0:00:00.063) 0:03:53.531 *********** 2025-07-06 20:30:04.741741 | orchestrator | 2025-07-06 20:30:04.741749 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-07-06 20:30:04.741757 | orchestrator | Sunday 06 July 2025 20:29:06 +0000 (0:00:00.066) 0:03:53.597 *********** 2025-07-06 20:30:04.741765 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.741772 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:30:04.741780 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:30:04.741788 | orchestrator | 2025-07-06 20:30:04.741796 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-07-06 20:30:04.741804 | orchestrator | Sunday 06 July 2025 20:29:22 +0000 (0:00:16.484) 0:04:10.082 *********** 2025-07-06 20:30:04.741812 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:30:04.741820 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:30:04.741828 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.741836 | orchestrator | 2025-07-06 20:30:04.741844 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-07-06 20:30:04.741852 | orchestrator | Sunday 06 July 2025 20:29:30 +0000 (0:00:08.104) 0:04:18.187 *********** 2025-07-06 20:30:04.741860 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.741872 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:30:04.741880 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:30:04.741888 | orchestrator | 2025-07-06 20:30:04.741896 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-07-06 20:30:04.741904 | orchestrator | Sunday 06 July 2025 20:29:41 +0000 (0:00:10.748) 0:04:28.935 *********** 2025-07-06 20:30:04.741912 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.741920 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:30:04.741928 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:30:04.741936 | orchestrator | 2025-07-06 20:30:04.741944 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-07-06 20:30:04.741955 | orchestrator | Sunday 06 July 2025 
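The bootstrap step runs a one-shot kolla container that applies Octavia's database migrations to the freshly created schemas (about 20 seconds in this run), after which the Restart octavia-* handlers bring up the long-running containers. Once octavia_api is running, the resulting schema revision can be read back in place (a sketch; octavia-db-manage is the upstream migration tool, and the in-container config path is assumed):

  sudo docker exec octavia_api octavia-db-manage --config-file /etc/octavia/octavia.conf current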
20:29:52 +0000 (0:00:10.778) 0:04:39.714 *********** 2025-07-06 20:30:04.741964 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:30:04.741972 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:30:04.741980 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:30:04.741988 | orchestrator | 2025-07-06 20:30:04.741996 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:30:04.742005 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-06 20:30:04.742046 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-06 20:30:04.742056 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-06 20:30:04.742064 | orchestrator | 2025-07-06 20:30:04.742072 | orchestrator | 2025-07-06 20:30:04.742080 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:30:04.742088 | orchestrator | Sunday 06 July 2025 20:30:03 +0000 (0:00:10.547) 0:04:50.261 *********** 2025-07-06 20:30:04.742096 | orchestrator | =============================================================================== 2025-07-06 20:30:04.742104 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.60s 2025-07-06 20:30:04.742112 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.51s 2025-07-06 20:30:04.742120 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.48s 2025-07-06 20:30:04.742128 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.32s 2025-07-06 20:30:04.742135 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.41s 2025-07-06 20:30:04.742143 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.78s 2025-07-06 20:30:04.742151 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.75s 2025-07-06 20:30:04.742159 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.55s 2025-07-06 20:30:04.742167 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.76s 2025-07-06 20:30:04.742175 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.44s 2025-07-06 20:30:04.742183 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 8.10s 2025-07-06 20:30:04.742191 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.88s 2025-07-06 20:30:04.742199 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.79s 2025-07-06 20:30:04.742206 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.45s 2025-07-06 20:30:04.742214 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.58s 2025-07-06 20:30:04.742222 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.54s 2025-07-06 20:30:04.742242 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.52s 2025-07-06 20:30:04.742250 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.50s 2025-07-06 20:30:04.742258 | 
orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.40s 2025-07-06 20:30:04.742267 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.34s 2025-07-06 20:30:07.776752 | orchestrator | 2025-07-06 20:30:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:30:10.821703 | orchestrator | 2025-07-06 20:30:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:30:13.866063 | orchestrator | 2025-07-06 20:30:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:30:16.909960 | orchestrator | 2025-07-06 20:30:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:30:19.951139 | orchestrator | 2025-07-06 20:30:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:30:22.990381 | orchestrator | 2025-07-06 20:30:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:30:26.030727 | orchestrator | 2025-07-06 20:30:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:30:29.069051 | orchestrator | 2025-07-06 20:30:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:30:32.107856 | orchestrator | 2025-07-06 20:30:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:30:35.150800 | orchestrator | 2025-07-06 20:30:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:30:38.187753 | orchestrator | 2025-07-06 20:30:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:30:41.236527 | orchestrator | 2025-07-06 20:30:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:30:44.275537 | orchestrator | 2025-07-06 20:30:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:30:47.314698 | orchestrator | 2025-07-06 20:30:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:30:50.358218 | orchestrator | 2025-07-06 20:30:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:30:53.403483 | orchestrator | 2025-07-06 20:30:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:30:56.446317 | orchestrator | 2025-07-06 20:30:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:30:59.497395 | orchestrator | 2025-07-06 20:30:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:31:02.534642 | orchestrator | 2025-07-06 20:31:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:31:05.577956 | orchestrator | 2025-07-06 20:31:05.834741 | orchestrator | 2025-07-06 20:31:05.837542 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sun Jul 6 20:31:05 UTC 2025 2025-07-06 20:31:05.837620 | orchestrator | 2025-07-06 20:31:06.146228 | orchestrator | ok: Runtime: 0:35:21.370289 2025-07-06 20:31:06.397397 | 2025-07-06 20:31:06.397563 | TASK [Bootstrap services] 2025-07-06 20:31:07.265580 | orchestrator | 2025-07-06 20:31:07.265812 | orchestrator | # BOOTSTRAP 2025-07-06 20:31:07.265839 | orchestrator | 2025-07-06 20:31:07.265854 | orchestrator | + set -e 2025-07-06 20:31:07.265867 | orchestrator | + echo 2025-07-06 20:31:07.265882 | orchestrator | + echo '# BOOTSTRAP' 2025-07-06 20:31:07.265900 | orchestrator | + echo 2025-07-06 20:31:07.265945 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-07-06 20:31:07.274938 | orchestrator | + set -e 2025-07-06 20:31:07.275192 | orchestrator | + sh -c 
/opt/configuration/scripts/bootstrap/300-openstack.sh 2025-07-06 20:31:10.908730 | orchestrator | 2025-07-06 20:31:10 | INFO  | It takes a moment until task c23a0fde-011d-4bee-8e0b-cde45b7d932f (flavor-manager) has been started and output is visible here. 2025-07-06 20:31:14.461190 | orchestrator | 2025-07-06 20:31:14 | INFO  | Flavor SCS-1V-4 created 2025-07-06 20:31:14.660894 | orchestrator | 2025-07-06 20:31:14 | INFO  | Flavor SCS-2V-8 created 2025-07-06 20:31:14.844065 | orchestrator | 2025-07-06 20:31:14 | INFO  | Flavor SCS-4V-16 created 2025-07-06 20:31:15.011162 | orchestrator | 2025-07-06 20:31:15 | INFO  | Flavor SCS-8V-32 created 2025-07-06 20:31:15.150268 | orchestrator | 2025-07-06 20:31:15 | INFO  | Flavor SCS-1V-2 created 2025-07-06 20:31:15.272805 | orchestrator | 2025-07-06 20:31:15 | INFO  | Flavor SCS-2V-4 created 2025-07-06 20:31:15.409020 | orchestrator | 2025-07-06 20:31:15 | INFO  | Flavor SCS-4V-8 created 2025-07-06 20:31:15.555584 | orchestrator | 2025-07-06 20:31:15 | INFO  | Flavor SCS-8V-16 created 2025-07-06 20:31:15.698818 | orchestrator | 2025-07-06 20:31:15 | INFO  | Flavor SCS-16V-32 created 2025-07-06 20:31:15.829297 | orchestrator | 2025-07-06 20:31:15 | INFO  | Flavor SCS-1V-8 created 2025-07-06 20:31:15.971327 | orchestrator | 2025-07-06 20:31:15 | INFO  | Flavor SCS-2V-16 created 2025-07-06 20:31:16.103212 | orchestrator | 2025-07-06 20:31:16 | INFO  | Flavor SCS-4V-32 created 2025-07-06 20:31:16.226929 | orchestrator | 2025-07-06 20:31:16 | INFO  | Flavor SCS-1L-1 created 2025-07-06 20:31:16.380048 | orchestrator | 2025-07-06 20:31:16 | INFO  | Flavor SCS-2V-4-20s created 2025-07-06 20:31:16.525547 | orchestrator | 2025-07-06 20:31:16 | INFO  | Flavor SCS-4V-16-100s created 2025-07-06 20:31:16.654292 | orchestrator | 2025-07-06 20:31:16 | INFO  | Flavor SCS-1V-4-10 created 2025-07-06 20:31:16.797384 | orchestrator | 2025-07-06 20:31:16 | INFO  | Flavor SCS-2V-8-20 created 2025-07-06 20:31:16.919298 | orchestrator | 2025-07-06 20:31:16 | INFO  | Flavor SCS-4V-16-50 created 2025-07-06 20:31:17.044805 | orchestrator | 2025-07-06 20:31:17 | INFO  | Flavor SCS-8V-32-100 created 2025-07-06 20:31:17.196150 | orchestrator | 2025-07-06 20:31:17 | INFO  | Flavor SCS-1V-2-5 created 2025-07-06 20:31:17.338906 | orchestrator | 2025-07-06 20:31:17 | INFO  | Flavor SCS-2V-4-10 created 2025-07-06 20:31:17.464990 | orchestrator | 2025-07-06 20:31:17 | INFO  | Flavor SCS-4V-8-20 created 2025-07-06 20:31:17.613836 | orchestrator | 2025-07-06 20:31:17 | INFO  | Flavor SCS-8V-16-50 created 2025-07-06 20:31:17.756322 | orchestrator | 2025-07-06 20:31:17 | INFO  | Flavor SCS-16V-32-100 created 2025-07-06 20:31:17.899162 | orchestrator | 2025-07-06 20:31:17 | INFO  | Flavor SCS-1V-8-20 created 2025-07-06 20:31:18.039572 | orchestrator | 2025-07-06 20:31:18 | INFO  | Flavor SCS-2V-16-50 created 2025-07-06 20:31:18.162374 | orchestrator | 2025-07-06 20:31:18 | INFO  | Flavor SCS-4V-32-100 created 2025-07-06 20:31:18.294963 | orchestrator | 2025-07-06 20:31:18 | INFO  | Flavor SCS-1L-1-5 created 2025-07-06 20:31:20.423398 | orchestrator | 2025-07-06 20:31:20 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-07-06 20:31:20.429319 | orchestrator | Registering Redlock._acquired_script 2025-07-06 20:31:20.429395 | orchestrator | Registering Redlock._extend_script 2025-07-06 20:31:20.429441 | orchestrator | Registering Redlock._release_script 2025-07-06 20:31:20.510289 | orchestrator | 2025-07-06 20:31:20 | INFO  | Task e835f015-729f-4f96-923a-dd051011c656 
(bootstrap-basic) was prepared for execution. 2025-07-06 20:31:20.511183 | orchestrator | 2025-07-06 20:31:20 | INFO  | It takes a moment until task e835f015-729f-4f96-923a-dd051011c656 (bootstrap-basic) has been started and output is visible here. 2025-07-06 20:31:24.542467 | orchestrator | 2025-07-06 20:31:24.542576 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-07-06 20:31:24.543407 | orchestrator | 2025-07-06 20:31:24.543987 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-06 20:31:24.546134 | orchestrator | Sunday 06 July 2025 20:31:24 +0000 (0:00:00.076) 0:00:00.076 *********** 2025-07-06 20:31:26.320891 | orchestrator | ok: [localhost] 2025-07-06 20:31:26.321007 | orchestrator | 2025-07-06 20:31:26.321455 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-07-06 20:31:26.321973 | orchestrator | Sunday 06 July 2025 20:31:26 +0000 (0:00:01.779) 0:00:01.855 *********** 2025-07-06 20:31:34.021453 | orchestrator | ok: [localhost] 2025-07-06 20:31:34.021557 | orchestrator | 2025-07-06 20:31:34.021572 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-07-06 20:31:34.021700 | orchestrator | Sunday 06 July 2025 20:31:34 +0000 (0:00:07.699) 0:00:09.554 *********** 2025-07-06 20:31:41.496004 | orchestrator | changed: [localhost] 2025-07-06 20:31:41.497052 | orchestrator | 2025-07-06 20:31:41.497853 | orchestrator | TASK [Get volume type local] *************************************************** 2025-07-06 20:31:41.499894 | orchestrator | Sunday 06 July 2025 20:31:41 +0000 (0:00:07.475) 0:00:17.030 *********** 2025-07-06 20:31:48.239832 | orchestrator | ok: [localhost] 2025-07-06 20:31:48.239973 | orchestrator | 2025-07-06 20:31:48.240950 | orchestrator | TASK [Create volume type local] ************************************************ 2025-07-06 20:31:48.241633 | orchestrator | Sunday 06 July 2025 20:31:48 +0000 (0:00:06.743) 0:00:23.773 *********** 2025-07-06 20:31:55.123215 | orchestrator | changed: [localhost] 2025-07-06 20:31:55.123358 | orchestrator | 2025-07-06 20:31:55.123375 | orchestrator | TASK [Create public network] *************************************************** 2025-07-06 20:31:55.123449 | orchestrator | Sunday 06 July 2025 20:31:55 +0000 (0:00:06.884) 0:00:30.657 *********** 2025-07-06 20:32:00.421404 | orchestrator | changed: [localhost] 2025-07-06 20:32:00.421666 | orchestrator | 2025-07-06 20:32:00.422481 | orchestrator | TASK [Set public network to default] ******************************************* 2025-07-06 20:32:00.422539 | orchestrator | Sunday 06 July 2025 20:32:00 +0000 (0:00:05.299) 0:00:35.957 *********** 2025-07-06 20:32:06.413211 | orchestrator | changed: [localhost] 2025-07-06 20:32:06.413556 | orchestrator | 2025-07-06 20:32:06.413868 | orchestrator | TASK [Create public subnet] **************************************************** 2025-07-06 20:32:06.414740 | orchestrator | Sunday 06 July 2025 20:32:06 +0000 (0:00:05.989) 0:00:41.946 *********** 2025-07-06 20:32:10.719570 | orchestrator | changed: [localhost] 2025-07-06 20:32:10.719771 | orchestrator | 2025-07-06 20:32:10.720470 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-07-06 20:32:10.722200 | orchestrator | Sunday 06 July 2025 20:32:10 +0000 (0:00:04.307) 0:00:46.253 *********** 2025-07-06 20:32:14.440382 | orchestrator | changed: 
[localhost] 2025-07-06 20:32:14.441568 | orchestrator | 2025-07-06 20:32:14.441635 | orchestrator | TASK [Create manager role] ***************************************************** 2025-07-06 20:32:14.443094 | orchestrator | Sunday 06 July 2025 20:32:14 +0000 (0:00:03.720) 0:00:49.974 *********** 2025-07-06 20:32:17.909846 | orchestrator | ok: [localhost] 2025-07-06 20:32:17.909970 | orchestrator | 2025-07-06 20:32:17.909994 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:32:17.910676 | orchestrator | 2025-07-06 20:32:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 20:32:17.910777 | orchestrator | 2025-07-06 20:32:17 | INFO  | Please wait and do not abort execution. 2025-07-06 20:32:17.911009 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:32:17.911682 | orchestrator | 2025-07-06 20:32:17.911757 | orchestrator | 2025-07-06 20:32:17.913483 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:32:17.914275 | orchestrator | Sunday 06 July 2025 20:32:17 +0000 (0:00:03.468) 0:00:53.442 *********** 2025-07-06 20:32:17.914943 | orchestrator | =============================================================================== 2025-07-06 20:32:17.915705 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.70s 2025-07-06 20:32:17.916174 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.48s 2025-07-06 20:32:17.916556 | orchestrator | Create volume type local ------------------------------------------------ 6.88s 2025-07-06 20:32:17.916896 | orchestrator | Get volume type local --------------------------------------------------- 6.74s 2025-07-06 20:32:17.917350 | orchestrator | Set public network to default ------------------------------------------- 5.99s 2025-07-06 20:32:17.917911 | orchestrator | Create public network --------------------------------------------------- 5.30s 2025-07-06 20:32:17.918130 | orchestrator | Create public subnet ---------------------------------------------------- 4.31s 2025-07-06 20:32:17.918825 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.72s 2025-07-06 20:32:17.919676 | orchestrator | Create manager role ----------------------------------------------------- 3.47s 2025-07-06 20:32:17.920040 | orchestrator | Gathering Facts --------------------------------------------------------- 1.78s 2025-07-06 20:32:20.158108 | orchestrator | 2025-07-06 20:32:20 | INFO  | It takes a moment until task 94f149f3-029a-428b-90cd-23309c5094ef (image-manager) has been started and output is visible here. 2025-07-06 20:32:23.624907 | orchestrator | 2025-07-06 20:32:23 | INFO  | Processing image 'Cirros 0.6.2' 2025-07-06 20:32:23.856436 | orchestrator | 2025-07-06 20:32:23 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-07-06 20:32:23.857163 | orchestrator | 2025-07-06 20:32:23 | INFO  | Importing image Cirros 0.6.2 2025-07-06 20:32:23.858193 | orchestrator | 2025-07-06 20:32:23 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-07-06 20:32:25.729647 | orchestrator | 2025-07-06 20:32:25 | INFO  | Waiting for image to leave queued state... 
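For orientation, the flavor-manager output and the bootstrap-basic play above boil down to a handful of standard OpenStack resources. The following is only a minimal sketch of roughly equivalent OpenStack CLI calls; the CIDR/prefix values are placeholders (not the testbed's actual configuration), and the real run is driven by the osism tooling rather than these commands.

# Sketch: resources roughly equivalent to the tasks logged above (placeholder values).
set -e

# "Create volume type LUKS" / "Create volume type local"
openstack volume type create \
    --encryption-provider luks \
    --encryption-cipher aes-xts-plain64 \
    --encryption-key-size 256 \
    --encryption-control-location front-end \
    LUKS
openstack volume type create local

# "Create public network" / "Set public network to default" /
# "Create public subnet" / "Create default IPv4 subnet pool"
openstack network create --external --default --share public
openstack subnet create --network public --subnet-range 192.0.2.0/24 public-subnet   # placeholder CIDR
openstack subnet pool create --default --pool-prefix 192.0.2.0/24 default-ipv4       # placeholder prefix

# One of the SCS flavors created by the flavor-manager (SCS-1V-4 = 1 vCPU, 4 GiB RAM, no root disk)
openstack flavor create --public --vcpus 1 --ram 4096 --disk 0 SCS-1V-4

# "Create manager role" (idempotent, mirroring the 'ok' result above)
openstack role create --or-show manager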
2025-07-06 20:32:27.775086 | orchestrator | 2025-07-06 20:32:27 | INFO  | Waiting for import to complete... 2025-07-06 20:32:37.905044 | orchestrator | 2025-07-06 20:32:37 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-07-06 20:32:38.106679 | orchestrator | 2025-07-06 20:32:38 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-07-06 20:32:38.106783 | orchestrator | 2025-07-06 20:32:38 | INFO  | Setting internal_version = 0.6.2 2025-07-06 20:32:38.106798 | orchestrator | 2025-07-06 20:32:38 | INFO  | Setting image_original_user = cirros 2025-07-06 20:32:38.108414 | orchestrator | 2025-07-06 20:32:38 | INFO  | Adding tag os:cirros 2025-07-06 20:32:38.333408 | orchestrator | 2025-07-06 20:32:38 | INFO  | Setting property architecture: x86_64 2025-07-06 20:32:38.624202 | orchestrator | 2025-07-06 20:32:38 | INFO  | Setting property hw_disk_bus: scsi 2025-07-06 20:32:38.883619 | orchestrator | 2025-07-06 20:32:38 | INFO  | Setting property hw_rng_model: virtio 2025-07-06 20:32:39.120038 | orchestrator | 2025-07-06 20:32:39 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-06 20:32:39.312296 | orchestrator | 2025-07-06 20:32:39 | INFO  | Setting property hw_watchdog_action: reset 2025-07-06 20:32:39.512051 | orchestrator | 2025-07-06 20:32:39 | INFO  | Setting property hypervisor_type: qemu 2025-07-06 20:32:39.722612 | orchestrator | 2025-07-06 20:32:39 | INFO  | Setting property os_distro: cirros 2025-07-06 20:32:39.930871 | orchestrator | 2025-07-06 20:32:39 | INFO  | Setting property replace_frequency: never 2025-07-06 20:32:40.146388 | orchestrator | 2025-07-06 20:32:40 | INFO  | Setting property uuid_validity: none 2025-07-06 20:32:40.358592 | orchestrator | 2025-07-06 20:32:40 | INFO  | Setting property provided_until: none 2025-07-06 20:32:40.550897 | orchestrator | 2025-07-06 20:32:40 | INFO  | Setting property image_description: Cirros 2025-07-06 20:32:40.794732 | orchestrator | 2025-07-06 20:32:40 | INFO  | Setting property image_name: Cirros 2025-07-06 20:32:41.051596 | orchestrator | 2025-07-06 20:32:41 | INFO  | Setting property internal_version: 0.6.2 2025-07-06 20:32:41.271674 | orchestrator | 2025-07-06 20:32:41 | INFO  | Setting property image_original_user: cirros 2025-07-06 20:32:41.501335 | orchestrator | 2025-07-06 20:32:41 | INFO  | Setting property os_version: 0.6.2 2025-07-06 20:32:41.747161 | orchestrator | 2025-07-06 20:32:41 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-07-06 20:32:41.959052 | orchestrator | 2025-07-06 20:32:41 | INFO  | Setting property image_build_date: 2023-05-30 2025-07-06 20:32:42.193214 | orchestrator | 2025-07-06 20:32:42 | INFO  | Checking status of 'Cirros 0.6.2' 2025-07-06 20:32:42.193990 | orchestrator | 2025-07-06 20:32:42 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-07-06 20:32:42.194998 | orchestrator | 2025-07-06 20:32:42 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-07-06 20:32:42.435869 | orchestrator | 2025-07-06 20:32:42 | INFO  | Processing image 'Cirros 0.6.3' 2025-07-06 20:32:42.681103 | orchestrator | 2025-07-06 20:32:42 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-07-06 20:32:42.681206 | orchestrator | 2025-07-06 20:32:42 | INFO  | Importing image Cirros 0.6.3 2025-07-06 20:32:42.681222 | orchestrator | 2025-07-06 20:32:42 | INFO  | Importing from URL 
https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-07-06 20:32:43.026919 | orchestrator | 2025-07-06 20:32:43 | INFO  | Waiting for image to leave queued state... 2025-07-06 20:32:45.070055 | orchestrator | 2025-07-06 20:32:45 | INFO  | Waiting for import to complete... 2025-07-06 20:32:55.162371 | orchestrator | 2025-07-06 20:32:55 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-07-06 20:32:55.628239 | orchestrator | 2025-07-06 20:32:55 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-07-06 20:32:55.629004 | orchestrator | 2025-07-06 20:32:55 | INFO  | Setting internal_version = 0.6.3 2025-07-06 20:32:55.630005 | orchestrator | 2025-07-06 20:32:55 | INFO  | Setting image_original_user = cirros 2025-07-06 20:32:55.631096 | orchestrator | 2025-07-06 20:32:55 | INFO  | Adding tag os:cirros 2025-07-06 20:32:55.816469 | orchestrator | 2025-07-06 20:32:55 | INFO  | Setting property architecture: x86_64 2025-07-06 20:32:56.141171 | orchestrator | 2025-07-06 20:32:56 | INFO  | Setting property hw_disk_bus: scsi 2025-07-06 20:32:56.316455 | orchestrator | 2025-07-06 20:32:56 | INFO  | Setting property hw_rng_model: virtio 2025-07-06 20:32:56.563671 | orchestrator | 2025-07-06 20:32:56 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-06 20:32:56.777889 | orchestrator | 2025-07-06 20:32:56 | INFO  | Setting property hw_watchdog_action: reset 2025-07-06 20:32:56.968704 | orchestrator | 2025-07-06 20:32:56 | INFO  | Setting property hypervisor_type: qemu 2025-07-06 20:32:57.167803 | orchestrator | 2025-07-06 20:32:57 | INFO  | Setting property os_distro: cirros 2025-07-06 20:32:57.393806 | orchestrator | 2025-07-06 20:32:57 | INFO  | Setting property replace_frequency: never 2025-07-06 20:32:57.604484 | orchestrator | 2025-07-06 20:32:57 | INFO  | Setting property uuid_validity: none 2025-07-06 20:32:57.805370 | orchestrator | 2025-07-06 20:32:57 | INFO  | Setting property provided_until: none 2025-07-06 20:32:58.055269 | orchestrator | 2025-07-06 20:32:58 | INFO  | Setting property image_description: Cirros 2025-07-06 20:32:58.238214 | orchestrator | 2025-07-06 20:32:58 | INFO  | Setting property image_name: Cirros 2025-07-06 20:32:58.665229 | orchestrator | 2025-07-06 20:32:58 | INFO  | Setting property internal_version: 0.6.3 2025-07-06 20:32:58.894827 | orchestrator | 2025-07-06 20:32:58 | INFO  | Setting property image_original_user: cirros 2025-07-06 20:32:59.381922 | orchestrator | 2025-07-06 20:32:59 | INFO  | Setting property os_version: 0.6.3 2025-07-06 20:32:59.590984 | orchestrator | 2025-07-06 20:32:59 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-07-06 20:32:59.826505 | orchestrator | 2025-07-06 20:32:59 | INFO  | Setting property image_build_date: 2024-09-26 2025-07-06 20:33:00.102800 | orchestrator | 2025-07-06 20:33:00 | INFO  | Checking status of 'Cirros 0.6.3' 2025-07-06 20:33:00.103576 | orchestrator | 2025-07-06 20:33:00 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-07-06 20:33:00.103843 | orchestrator | 2025-07-06 20:33:00 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-07-06 20:33:01.219992 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-07-06 20:33:03.111263 | orchestrator | 2025-07-06 20:33:03 | INFO  | date: 2025-07-06 2025-07-06 20:33:03.111422 | orchestrator | 2025-07-06 20:33:03 | INFO  | image: 
octavia-amphora-haproxy-2024.2.20250706.qcow2 2025-07-06 20:33:03.111443 | orchestrator | 2025-07-06 20:33:03 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250706.qcow2 2025-07-06 20:33:03.111479 | orchestrator | 2025-07-06 20:33:03 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250706.qcow2.CHECKSUM 2025-07-06 20:33:03.151036 | orchestrator | 2025-07-06 20:33:03 | INFO  | checksum: e7dc90cac0c85815d1d7db62923debdc1ff8dd88fe2a46fd4546115b627650c4 2025-07-06 20:33:03.249031 | orchestrator | 2025-07-06 20:33:03 | INFO  | It takes a moment until task 33c41c2f-ab94-47ad-921a-3a5252dcf7a7 (image-manager) has been started and output is visible here. 2025-07-06 20:33:03.480754 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-07-06 20:33:03.481259 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-07-06 20:33:05.649420 | orchestrator | 2025-07-06 20:33:05 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-07-06' 2025-07-06 20:33:05.670662 | orchestrator | 2025-07-06 20:33:05 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250706.qcow2: 200 2025-07-06 20:33:05.670791 | orchestrator | 2025-07-06 20:33:05 | INFO  | Importing image OpenStack Octavia Amphora 2025-07-06 2025-07-06 20:33:05.671248 | orchestrator | 2025-07-06 20:33:05 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250706.qcow2 2025-07-06 20:33:06.055495 | orchestrator | 2025-07-06 20:33:06 | INFO  | Waiting for image to leave queued state... 2025-07-06 20:33:08.092691 | orchestrator | 2025-07-06 20:33:08 | INFO  | Waiting for import to complete... 2025-07-06 20:33:18.373828 | orchestrator | 2025-07-06 20:33:18 | INFO  | Waiting for import to complete... 2025-07-06 20:33:28.475855 | orchestrator | 2025-07-06 20:33:28 | INFO  | Waiting for import to complete... 2025-07-06 20:33:38.560773 | orchestrator | 2025-07-06 20:33:38 | INFO  | Waiting for import to complete... 2025-07-06 20:33:48.665894 | orchestrator | 2025-07-06 20:33:48 | INFO  | Waiting for import to complete... 
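The repeated "Waiting for import to complete..." lines show the image-manager polling Glance while it fetches the amphora image from the URL logged by the bootstrap script. A rough manual equivalent is sketched below; the URL and checksum are the values printed above, while the local file name and the property subset are illustrative only.

# Sketch: manual equivalent of the amphora image import being polled above.
set -e
URL="https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250706.qcow2"
SHA256="e7dc90cac0c85815d1d7db62923debdc1ff8dd88fe2a46fd4546115b627650c4"

# Download and verify against the logged checksum
curl -fsSL -o amphora.qcow2 "$URL"
echo "${SHA256}  amphora.qcow2" | sha256sum -c -

# Upload and tag/propertize the image as the image-manager does afterwards
# (tag and property names taken from the log output)
openstack image create \
    --disk-format qcow2 --container-format bare \
    --file amphora.qcow2 \
    --tag amphora --tag os:ubuntu \
    --property hw_disk_bus=scsi \
    --property hw_rng_model=virtio \
    --property hw_scsi_model=virtio-scsi \
    --property hypervisor_type=qemu \
    "OpenStack Octavia Amphora 2025-07-06"

The image-manager itself appears to use Glance's URL-based import instead of a local upload, which is why the log shows the image leaving the queued state and the import being polled rather than a single upload call.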
2025-07-06 20:33:58.797018 | orchestrator | 2025-07-06 20:33:58 | INFO  | Import of 'OpenStack Octavia Amphora 2025-07-06' successfully completed, reloading images 2025-07-06 20:33:59.161370 | orchestrator | 2025-07-06 20:33:59 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-07-06' 2025-07-06 20:33:59.161779 | orchestrator | 2025-07-06 20:33:59 | INFO  | Setting internal_version = 2025-07-06 2025-07-06 20:33:59.162631 | orchestrator | 2025-07-06 20:33:59 | INFO  | Setting image_original_user = ubuntu 2025-07-06 20:33:59.163658 | orchestrator | 2025-07-06 20:33:59 | INFO  | Adding tag amphora 2025-07-06 20:33:59.350631 | orchestrator | 2025-07-06 20:33:59 | INFO  | Adding tag os:ubuntu 2025-07-06 20:33:59.575633 | orchestrator | 2025-07-06 20:33:59 | INFO  | Setting property architecture: x86_64 2025-07-06 20:33:59.839909 | orchestrator | 2025-07-06 20:33:59 | INFO  | Setting property hw_disk_bus: scsi 2025-07-06 20:34:00.044140 | orchestrator | 2025-07-06 20:34:00 | INFO  | Setting property hw_rng_model: virtio 2025-07-06 20:34:00.230008 | orchestrator | 2025-07-06 20:34:00 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-06 20:34:00.469566 | orchestrator | 2025-07-06 20:34:00 | INFO  | Setting property hw_watchdog_action: reset 2025-07-06 20:34:00.659012 | orchestrator | 2025-07-06 20:34:00 | INFO  | Setting property hypervisor_type: qemu 2025-07-06 20:34:00.875861 | orchestrator | 2025-07-06 20:34:00 | INFO  | Setting property os_distro: ubuntu 2025-07-06 20:34:01.080308 | orchestrator | 2025-07-06 20:34:01 | INFO  | Setting property replace_frequency: quarterly 2025-07-06 20:34:01.299626 | orchestrator | 2025-07-06 20:34:01 | INFO  | Setting property uuid_validity: last-1 2025-07-06 20:34:01.566147 | orchestrator | 2025-07-06 20:34:01 | INFO  | Setting property provided_until: none 2025-07-06 20:34:01.805593 | orchestrator | 2025-07-06 20:34:01 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-07-06 20:34:02.064254 | orchestrator | 2025-07-06 20:34:02 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-07-06 20:34:02.285808 | orchestrator | 2025-07-06 20:34:02 | INFO  | Setting property internal_version: 2025-07-06 2025-07-06 20:34:02.505616 | orchestrator | 2025-07-06 20:34:02 | INFO  | Setting property image_original_user: ubuntu 2025-07-06 20:34:02.723036 | orchestrator | 2025-07-06 20:34:02 | INFO  | Setting property os_version: 2025-07-06 2025-07-06 20:34:02.957914 | orchestrator | 2025-07-06 20:34:02 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250706.qcow2 2025-07-06 20:34:03.183319 | orchestrator | 2025-07-06 20:34:03 | INFO  | Setting property image_build_date: 2025-07-06 2025-07-06 20:34:03.405456 | orchestrator | 2025-07-06 20:34:03 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-07-06' 2025-07-06 20:34:03.406117 | orchestrator | 2025-07-06 20:34:03 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-07-06' 2025-07-06 20:34:03.589984 | orchestrator | 2025-07-06 20:34:03 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-07-06 20:34:03.591133 | orchestrator | 2025-07-06 20:34:03 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-07-06 20:34:03.591750 | orchestrator | 2025-07-06 20:34:03 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-07-06 20:34:03.592796 | 
orchestrator | 2025-07-06 20:34:03 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-07-06 20:34:04.170879 | orchestrator | ok: Runtime: 0:02:57.205416 2025-07-06 20:34:04.184259 | 2025-07-06 20:34:04.184381 | TASK [Run checks] 2025-07-06 20:34:04.830591 | orchestrator | + set -e 2025-07-06 20:34:04.830781 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-06 20:34:04.830807 | orchestrator | ++ export INTERACTIVE=false 2025-07-06 20:34:04.830827 | orchestrator | ++ INTERACTIVE=false 2025-07-06 20:34:04.830841 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-06 20:34:04.830854 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-06 20:34:04.830869 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-07-06 20:34:04.831564 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-06 20:34:04.834166 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-07-06 20:34:04.834225 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-07-06 20:34:04.834246 | orchestrator | 2025-07-06 20:34:04.834259 | orchestrator | # CHECK 2025-07-06 20:34:04.834271 | orchestrator | 2025-07-06 20:34:04.834282 | orchestrator | + echo 2025-07-06 20:34:04.834303 | orchestrator | + echo '# CHECK' 2025-07-06 20:34:04.834315 | orchestrator | + echo 2025-07-06 20:34:04.834376 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-07-06 20:34:04.835140 | orchestrator | ++ semver 9.1.0 5.0.0 2025-07-06 20:34:04.890430 | orchestrator | 2025-07-06 20:34:04.890521 | orchestrator | ## Containers @ testbed-manager 2025-07-06 20:34:04.890533 | orchestrator | 2025-07-06 20:34:04.890546 | orchestrator | + [[ 1 -eq -1 ]] 2025-07-06 20:34:04.890557 | orchestrator | + echo 2025-07-06 20:34:04.890567 | orchestrator | + echo '## Containers @ testbed-manager' 2025-07-06 20:34:04.890578 | orchestrator | + echo 2025-07-06 20:34:04.890588 | orchestrator | + osism container testbed-manager ps 2025-07-06 20:34:07.036384 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-07-06 20:34:07.036582 | orchestrator | 9ab898e4786a registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_blackbox_exporter 2025-07-06 20:34:07.036623 | orchestrator | c9bf25b3ab15 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_alertmanager 2025-07-06 20:34:07.036656 | orchestrator | a2dbf3b064ac registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-07-06 20:34:07.036677 | orchestrator | 367450fbc948 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2025-07-06 20:34:07.036697 | orchestrator | 613db75a48e4 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_server 2025-07-06 20:34:07.036719 | orchestrator | bec04e62a7b8 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 18 minutes ago Up 18 minutes cephclient 2025-07-06 20:34:07.036745 | orchestrator | 7e469cfaafeb registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-07-06 20:34:07.036765 | 
orchestrator | 4b55de0ce3be registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-07-06 20:34:07.036785 | orchestrator | 994bd87b9a55 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-07-06 20:34:07.036838 | orchestrator | 4ee7addaee93 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 32 minutes ago Up 32 minutes openstackclient 2025-07-06 20:34:07.036858 | orchestrator | 1ed84ccf1387 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 32 minutes ago Up 32 minutes (healthy) 8080/tcp homer 2025-07-06 20:34:07.036871 | orchestrator | 7af49a3b06b6 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 53 minutes ago Up 52 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-07-06 20:34:07.036883 | orchestrator | 4f628a4c7465 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" 56 minutes ago Up 38 minutes (healthy) manager-inventory_reconciler-1 2025-07-06 20:34:07.036900 | orchestrator | b264c44069e9 registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" 56 minutes ago Up 39 minutes (healthy) kolla-ansible 2025-07-06 20:34:07.036934 | orchestrator | 83e06ff614b1 registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" 56 minutes ago Up 39 minutes (healthy) osism-kubernetes 2025-07-06 20:34:07.036947 | orchestrator | 93cd056f046c registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" 56 minutes ago Up 39 minutes (healthy) osism-ansible 2025-07-06 20:34:07.036958 | orchestrator | 80ad36152319 registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" 56 minutes ago Up 39 minutes (healthy) ceph-ansible 2025-07-06 20:34:07.036969 | orchestrator | a20401ade6d1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 56 minutes ago Up 39 minutes (healthy) 8000/tcp manager-ara-server-1 2025-07-06 20:34:07.036981 | orchestrator | 11e968a2f181 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) manager-beat-1 2025-07-06 20:34:07.037921 | orchestrator | 3273e1f76b3e registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" 56 minutes ago Up 39 minutes (healthy) osismclient 2025-07-06 20:34:07.037956 | orchestrator | 524f142c889c registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) manager-listener-1 2025-07-06 20:34:07.037977 | orchestrator | 0a68a9337094 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) manager-flower-1 2025-07-06 20:34:07.037996 | orchestrator | 480441558c95 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) manager-openstack-1 2025-07-06 20:34:07.038066 | orchestrator | 077d91c378d5 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-07-06 20:34:07.038080 | orchestrator | 52ce06637ecd registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 56 minutes ago Up 39 minutes (healthy) 3306/tcp manager-mariadb-1 2025-07-06 20:34:07.038091 | orchestrator | af0c6563dc90 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 56 minutes ago Up 39 minutes (healthy) 6379/tcp manager-redis-1 
2025-07-06 20:34:07.038103 | orchestrator | bea1441cc3e9 registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 58 minutes ago Up 58 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-07-06 20:34:07.283907 | orchestrator | 2025-07-06 20:34:07.284020 | orchestrator | ## Images @ testbed-manager 2025-07-06 20:34:07.284037 | orchestrator | 2025-07-06 20:34:07.284049 | orchestrator | + echo 2025-07-06 20:34:07.284060 | orchestrator | + echo '## Images @ testbed-manager' 2025-07-06 20:34:07.284073 | orchestrator | + echo 2025-07-06 20:34:07.284085 | orchestrator | + osism container testbed-manager images 2025-07-06 20:34:09.279307 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-06 20:34:09.279466 | orchestrator | registry.osism.tech/osism/homer v25.05.2 24de99a938e3 17 hours ago 11.5MB 2025-07-06 20:34:09.279479 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 beca3f9f79e6 17 hours ago 233MB 2025-07-06 20:34:09.279491 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250530.0 f5f0b51afbcc 4 weeks ago 574MB 2025-07-06 20:34:09.279500 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250531.0 eb6fb0ff8e52 5 weeks ago 578MB 2025-07-06 20:34:09.279508 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 5 weeks ago 319MB 2025-07-06 20:34:09.279516 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 5 weeks ago 747MB 2025-07-06 20:34:09.279548 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 5 weeks ago 629MB 2025-07-06 20:34:09.279557 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250530 48bb7d2c6b08 5 weeks ago 892MB 2025-07-06 20:34:09.279565 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250530 3d4c4d6fe7fa 5 weeks ago 361MB 2025-07-06 20:34:09.279573 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 5 weeks ago 411MB 2025-07-06 20:34:09.279581 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 5 weeks ago 359MB 2025-07-06 20:34:09.279589 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250530 0e447338580d 5 weeks ago 457MB 2025-07-06 20:34:09.279596 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250530.0 bce894afc91f 5 weeks ago 538MB 2025-07-06 20:34:09.279604 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250530.0 467731c31786 5 weeks ago 1.21GB 2025-07-06 20:34:09.279612 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250530.0 1b4e0cdc5cdd 5 weeks ago 308MB 2025-07-06 20:34:09.279640 | orchestrator | registry.osism.tech/osism/osism 0.20250530.0 bce098659f68 5 weeks ago 297MB 2025-07-06 20:34:09.279650 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 5 weeks ago 41.4MB 2025-07-06 20:34:09.279659 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 5 weeks ago 224MB 2025-07-06 20:34:09.279667 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 8 weeks ago 453MB 2025-07-06 20:34:09.279676 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 6b3ebe9793bb 4 months ago 328MB 2025-07-06 20:34:09.279684 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 10 months ago 300MB 
2025-07-06 20:34:09.279693 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 13 months ago 146MB 2025-07-06 20:34:09.500645 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-07-06 20:34:09.500755 | orchestrator | ++ semver 9.1.0 5.0.0 2025-07-06 20:34:09.548222 | orchestrator | 2025-07-06 20:34:09.548320 | orchestrator | ## Containers @ testbed-node-0 2025-07-06 20:34:09.548334 | orchestrator | 2025-07-06 20:34:09.548345 | orchestrator | + [[ 1 -eq -1 ]] 2025-07-06 20:34:09.548356 | orchestrator | + echo 2025-07-06 20:34:09.548369 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-07-06 20:34:09.548381 | orchestrator | + echo 2025-07-06 20:34:09.548417 | orchestrator | + osism container testbed-node-0 ps 2025-07-06 20:34:11.717223 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-07-06 20:34:11.717347 | orchestrator | fc967dedd3bd registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-07-06 20:34:11.717368 | orchestrator | fa9338f6d022 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-07-06 20:34:11.717381 | orchestrator | f830f4565830 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-07-06 20:34:11.717414 | orchestrator | 3d7aa3b727ac registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-07-06 20:34:11.717426 | orchestrator | 514c4364bed2 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-07-06 20:34:11.717437 | orchestrator | 99e87ff4b7a1 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-07-06 20:34:11.717448 | orchestrator | da39804ff9aa registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor 2025-07-06 20:34:11.717459 | orchestrator | a5497f491989 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2025-07-06 20:34:11.717470 | orchestrator | 37501c7ec18a registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api 2025-07-06 20:34:11.717499 | orchestrator | a4b3ed9e8c20 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2025-07-06 20:34:11.717511 | orchestrator | f8a7e1c923c0 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2025-07-06 20:34:11.717541 | orchestrator | 3f93889aa1b6 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2025-07-06 20:34:11.717552 | orchestrator | 772bd0194522 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_novncproxy 2025-07-06 20:34:11.717563 | orchestrator | 9cd94d2eac21 
registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2025-07-06 20:34:11.717574 | orchestrator | ffe25873136e registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2025-07-06 20:34:11.717584 | orchestrator | ca84cfd01ca4 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) nova_conductor 2025-07-06 20:34:11.717595 | orchestrator | ab29c391913b registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2025-07-06 20:34:11.717605 | orchestrator | 707f9a68d4d7 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) neutron_server 2025-07-06 20:34:11.717616 | orchestrator | d52850677a13 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_worker 2025-07-06 20:34:11.717648 | orchestrator | 398aa49ca245 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener 2025-07-06 20:34:11.717660 | orchestrator | 2f5c9ae627c4 registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api 2025-07-06 20:34:11.717670 | orchestrator | e41dbbee8841 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api 2025-07-06 20:34:11.717681 | orchestrator | 412f123afc27 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 10 minutes (healthy) nova_scheduler 2025-07-06 20:34:11.717691 | orchestrator | 563cd8aa3270 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2025-07-06 20:34:11.717705 | orchestrator | c08bca50c82d registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler 2025-07-06 20:34:11.717716 | orchestrator | 2844b20f7926 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) glance_api 2025-07-06 20:34:11.717726 | orchestrator | fb02f5af9909 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-07-06 20:34:11.717737 | orchestrator | 7004924b972e registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2025-07-06 20:34:11.717753 | orchestrator | af66e8b5f2c6 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_api 2025-07-06 20:34:11.717773 | orchestrator | 8b2ba7138fde registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2025-07-06 20:34:11.717783 | orchestrator | 7f82380c6aaf registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init 
--single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2025-07-06 20:34:11.717794 | orchestrator | e7242cfeddbb registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-0 2025-07-06 20:34:11.717809 | orchestrator | a095dfa77bc6 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-07-06 20:34:11.717820 | orchestrator | 4d0084bb8f55 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-07-06 20:34:11.717831 | orchestrator | 3e084a183688 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-07-06 20:34:11.717842 | orchestrator | 4a049c96809f registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 20 minutes ago Up 19 minutes (healthy) horizon 2025-07-06 20:34:11.717852 | orchestrator | 195a1b7d1bb3 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-07-06 20:34:11.717863 | orchestrator | 9277c2bbe9c0 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2025-07-06 20:34:11.717873 | orchestrator | fe068c25f24b registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-07-06 20:34:11.717888 | orchestrator | 3bacabe6df4b registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-07-06 20:34:11.717907 | orchestrator | 0b1cb213ec7c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0 2025-07-06 20:34:11.717919 | orchestrator | d7ff1e6fbfbc registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-07-06 20:34:11.717929 | orchestrator | 51e7e3c1ec52 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-07-06 20:34:11.717940 | orchestrator | 922196901a13 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-07-06 20:34:11.717950 | orchestrator | ba155e255ff2 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2025-07-06 20:34:11.717961 | orchestrator | 7098bdfdf455 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-07-06 20:34:11.717972 | orchestrator | 7ffe8194780d registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-07-06 20:34:11.717989 | orchestrator | c814e8933a2f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0 2025-07-06 20:34:11.718000 | orchestrator | d715f1cff2c2 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2025-07-06 20:34:11.718011 | orchestrator | 279b2164d6a4 
registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-07-06 20:34:11.718066 | orchestrator | 1dc40374daeb registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-07-06 20:34:11.718077 | orchestrator | 66dd0b3c63c4 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-07-06 20:34:11.718088 | orchestrator | 9b0d1bfbbe40 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-07-06 20:34:11.718099 | orchestrator | 3a154a52014a registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-07-06 20:34:11.718109 | orchestrator | 80009b38d19f registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-07-06 20:34:11.718120 | orchestrator | 0f965988b4c9 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-07-06 20:34:11.718131 | orchestrator | 423e4748e263 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-07-06 20:34:12.016472 | orchestrator | 2025-07-06 20:34:12.016590 | orchestrator | ## Images @ testbed-node-0 2025-07-06 20:34:12.016606 | orchestrator | 2025-07-06 20:34:12.016675 | orchestrator | + echo 2025-07-06 20:34:12.016690 | orchestrator | + echo '## Images @ testbed-node-0' 2025-07-06 20:34:12.016702 | orchestrator | + echo 2025-07-06 20:34:12.016714 | orchestrator | + osism container testbed-node-0 images 2025-07-06 20:34:14.117536 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-06 20:34:14.117679 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 5 weeks ago 319MB 2025-07-06 20:34:14.117694 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 5 weeks ago 319MB 2025-07-06 20:34:14.117706 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 5 weeks ago 330MB 2025-07-06 20:34:14.117717 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 5 weeks ago 1.59GB 2025-07-06 20:34:14.117728 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 5 weeks ago 1.55GB 2025-07-06 20:34:14.117740 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 5 weeks ago 419MB 2025-07-06 20:34:14.117750 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 5 weeks ago 747MB 2025-07-06 20:34:14.117761 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 5 weeks ago 327MB 2025-07-06 20:34:14.117772 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 5 weeks ago 376MB 2025-07-06 20:34:14.117819 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 5 weeks ago 629MB 2025-07-06 20:34:14.117831 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 5 weeks ago 1.01GB 2025-07-06 20:34:14.117842 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 
10.11.13.20250530 5a4e6980c376 5 weeks ago 591MB 2025-07-06 20:34:14.117853 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 5 weeks ago 354MB 2025-07-06 20:34:14.117887 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 5 weeks ago 352MB 2025-07-06 20:34:14.117899 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 5 weeks ago 411MB 2025-07-06 20:34:14.117911 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 5 weeks ago 345MB 2025-07-06 20:34:14.117922 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 5 weeks ago 359MB 2025-07-06 20:34:14.117933 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 5 weeks ago 326MB 2025-07-06 20:34:14.117944 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 5 weeks ago 325MB 2025-07-06 20:34:14.117955 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 5 weeks ago 1.21GB 2025-07-06 20:34:14.117966 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 5 weeks ago 362MB 2025-07-06 20:34:14.117976 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 5 weeks ago 362MB 2025-07-06 20:34:14.117987 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 5 weeks ago 1.15GB 2025-07-06 20:34:14.117998 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 5 weeks ago 1.04GB 2025-07-06 20:34:14.118009 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 5 weeks ago 1.25GB 2025-07-06 20:34:14.118078 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250530 ec3349a6437e 5 weeks ago 1.04GB 2025-07-06 20:34:14.118090 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250530 726d5cfde6f9 5 weeks ago 1.04GB 2025-07-06 20:34:14.118102 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250530 c2f966fc60ed 5 weeks ago 1.04GB 2025-07-06 20:34:14.118113 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250530 7c85bdb64788 5 weeks ago 1.04GB 2025-07-06 20:34:14.118124 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 5 weeks ago 1.2GB 2025-07-06 20:34:14.118136 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 5 weeks ago 1.31GB 2025-07-06 20:34:14.118179 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 5 weeks ago 1.12GB 2025-07-06 20:34:14.118192 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 5 weeks ago 1.12GB 2025-07-06 20:34:14.118203 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 5 weeks ago 1.1GB 2025-07-06 20:34:14.118214 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 5 weeks ago 1.1GB 2025-07-06 20:34:14.118235 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 5 weeks ago 1.1GB 2025-07-06 20:34:14.118246 | orchestrator | 
registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 5 weeks ago 1.41GB 2025-07-06 20:34:14.118258 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 5 weeks ago 1.41GB 2025-07-06 20:34:14.118276 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 5 weeks ago 1.06GB 2025-07-06 20:34:14.118287 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 5 weeks ago 1.06GB 2025-07-06 20:34:14.118299 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 5 weeks ago 1.05GB 2025-07-06 20:34:14.118310 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 5 weeks ago 1.05GB 2025-07-06 20:34:14.118321 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 5 weeks ago 1.05GB 2025-07-06 20:34:14.118333 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 5 weeks ago 1.05GB 2025-07-06 20:34:14.118344 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250530 aa9066568160 5 weeks ago 1.04GB 2025-07-06 20:34:14.118355 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250530 546dea2f2472 5 weeks ago 1.04GB 2025-07-06 20:34:14.118367 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 5 weeks ago 1.3GB 2025-07-06 20:34:14.118378 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 5 weeks ago 1.29GB 2025-07-06 20:34:14.118389 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 5 weeks ago 1.42GB 2025-07-06 20:34:14.118423 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 5 weeks ago 1.29GB 2025-07-06 20:34:14.118435 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 5 weeks ago 1.06GB 2025-07-06 20:34:14.118446 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 5 weeks ago 1.06GB 2025-07-06 20:34:14.118457 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 5 weeks ago 1.06GB 2025-07-06 20:34:14.118468 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 5 weeks ago 1.11GB 2025-07-06 20:34:14.118479 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 5 weeks ago 1.13GB 2025-07-06 20:34:14.118489 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 5 weeks ago 1.11GB 2025-07-06 20:34:14.118500 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250530 df0a04869ff0 5 weeks ago 1.11GB 2025-07-06 20:34:14.118511 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250530 e1b2b0cc8e5c 5 weeks ago 1.12GB 2025-07-06 20:34:14.118522 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 5 weeks ago 947MB 2025-07-06 20:34:14.118533 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 5 weeks ago 947MB 2025-07-06 20:34:14.118544 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 5 weeks ago 948MB 2025-07-06 20:34:14.118555 | orchestrator | 
registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 5 weeks ago 948MB 2025-07-06 20:34:14.118573 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 weeks ago 1.27GB 2025-07-06 20:34:14.362126 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-07-06 20:34:14.362934 | orchestrator | ++ semver 9.1.0 5.0.0 2025-07-06 20:34:14.411962 | orchestrator | 2025-07-06 20:34:14.412120 | orchestrator | ## Containers @ testbed-node-1 2025-07-06 20:34:14.412140 | orchestrator | 2025-07-06 20:34:14.412152 | orchestrator | + [[ 1 -eq -1 ]] 2025-07-06 20:34:14.412163 | orchestrator | + echo 2025-07-06 20:34:14.412175 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-07-06 20:34:14.412187 | orchestrator | + echo 2025-07-06 20:34:14.412198 | orchestrator | + osism container testbed-node-1 ps 2025-07-06 20:34:16.506883 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-07-06 20:34:16.507053 | orchestrator | b7d0254fb2bc registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-07-06 20:34:16.507082 | orchestrator | b2fa9b91f464 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-07-06 20:34:16.507102 | orchestrator | cdfa71f92a25 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-07-06 20:34:16.507122 | orchestrator | 823418d9e6e6 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-07-06 20:34:16.507140 | orchestrator | b6aaf653c34b registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 5 minutes ago Up 4 minutes (healthy) octavia_api 2025-07-06 20:34:16.507158 | orchestrator | b548d5e0ff45 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-07-06 20:34:16.507176 | orchestrator | 89267a1c4135 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor 2025-07-06 20:34:16.507188 | orchestrator | 373aa77f66c6 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2025-07-06 20:34:16.507199 | orchestrator | 63c81b1c6502 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api 2025-07-06 20:34:16.507209 | orchestrator | 749e6929f43c registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2025-07-06 20:34:16.507248 | orchestrator | 99fa9b03bb33 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2025-07-06 20:34:16.507267 | orchestrator | daa48674f277 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_novncproxy 2025-07-06 20:34:16.507284 | orchestrator | 9d088d2cf6e5 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) 
designate_producer 2025-07-06 20:34:16.507302 | orchestrator | 59af8820c83c registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2025-07-06 20:34:16.507351 | orchestrator | 71125f881b05 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2025-07-06 20:34:16.507371 | orchestrator | eedb035358f3 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2025-07-06 20:34:16.507384 | orchestrator | 6c4bd6e34035 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 10 minutes (healthy) nova_conductor 2025-07-06 20:34:16.507423 | orchestrator | e0efece8888a registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2025-07-06 20:34:16.507438 | orchestrator | 0b231b366683 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_worker 2025-07-06 20:34:16.507472 | orchestrator | fa1e71599754 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener 2025-07-06 20:34:16.507510 | orchestrator | 21b71e865313 registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api 2025-07-06 20:34:16.507584 | orchestrator | 376063e3a327 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api 2025-07-06 20:34:16.507597 | orchestrator | f5ded37da622 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 10 minutes (healthy) nova_scheduler 2025-07-06 20:34:16.507610 | orchestrator | c985a7899127 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) glance_api 2025-07-06 20:34:16.507623 | orchestrator | 211d25cf1775 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2025-07-06 20:34:16.507644 | orchestrator | 77af87951c52 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler 2025-07-06 20:34:16.507663 | orchestrator | 4b640e582913 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2025-07-06 20:34:16.507682 | orchestrator | 5611fa0193c3 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-07-06 20:34:16.507700 | orchestrator | 46577d201291 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2025-07-06 20:34:16.507717 | orchestrator | 50a80dd3eec8 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2025-07-06 20:34:16.507736 | orchestrator | 8543a951926d 
registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2025-07-06 20:34:16.507753 | orchestrator | e374f70322ad registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-1 2025-07-06 20:34:16.507788 | orchestrator | 4191b602891b registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-07-06 20:34:16.507806 | orchestrator | 8d5c82b252be registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-07-06 20:34:16.507824 | orchestrator | 37d4b6e3787d registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-07-06 20:34:16.507835 | orchestrator | c7770aedc595 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-07-06 20:34:16.507846 | orchestrator | 2ddb309b19c3 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-07-06 20:34:16.507857 | orchestrator | 91241627c3ee registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-07-06 20:34:16.507868 | orchestrator | 5f4245a04468 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-07-06 20:34:16.507878 | orchestrator | 5337e3ffe23c registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-07-06 20:34:16.507901 | orchestrator | 3600646ea0f6 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1 2025-07-06 20:34:16.507912 | orchestrator | 975d36cb09d5 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-07-06 20:34:16.507923 | orchestrator | eba42dd9236a registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-07-06 20:34:16.507941 | orchestrator | 8857555a0cdc registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_northd 2025-07-06 20:34:16.507952 | orchestrator | de91d4626cc6 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db 2025-07-06 20:34:16.507963 | orchestrator | 5ae767eaa714 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-07-06 20:34:16.507974 | orchestrator | 0668b408bb45 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-07-06 20:34:16.507985 | orchestrator | 80df63a94c91 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-07-06 20:34:16.507996 | orchestrator | 225801eb6695 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1 2025-07-06 
20:34:16.508006 | orchestrator | f5fd123a9445 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-07-06 20:34:16.508017 | orchestrator | 4e63cd3cbb0d registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-07-06 20:34:16.508035 | orchestrator | 1ee23f91d521 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-07-06 20:34:16.508045 | orchestrator | 41012e7d9307 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-07-06 20:34:16.508056 | orchestrator | 4d9d0f39f4e8 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-07-06 20:34:16.508067 | orchestrator | f19ac8722ed8 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-07-06 20:34:16.508077 | orchestrator | ed8a91976487 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-07-06 20:34:16.508088 | orchestrator | f1b3067978ac registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-07-06 20:34:16.766468 | orchestrator | 2025-07-06 20:34:16.766568 | orchestrator | ## Images @ testbed-node-1 2025-07-06 20:34:16.766583 | orchestrator | 2025-07-06 20:34:16.766595 | orchestrator | + echo 2025-07-06 20:34:16.766606 | orchestrator | + echo '## Images @ testbed-node-1' 2025-07-06 20:34:16.766618 | orchestrator | + echo 2025-07-06 20:34:16.766629 | orchestrator | + osism container testbed-node-1 images 2025-07-06 20:34:18.891205 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-06 20:34:18.891304 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 5 weeks ago 319MB 2025-07-06 20:34:18.891317 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 5 weeks ago 319MB 2025-07-06 20:34:18.891328 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 5 weeks ago 330MB 2025-07-06 20:34:18.891338 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 5 weeks ago 1.59GB 2025-07-06 20:34:18.891348 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 5 weeks ago 1.55GB 2025-07-06 20:34:18.891357 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 5 weeks ago 419MB 2025-07-06 20:34:18.891367 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 5 weeks ago 747MB 2025-07-06 20:34:18.891377 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 5 weeks ago 327MB 2025-07-06 20:34:18.891387 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 5 weeks ago 376MB 2025-07-06 20:34:18.891396 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 5 weeks ago 629MB 2025-07-06 20:34:18.891458 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 5 weeks ago 1.01GB 2025-07-06 20:34:18.891468 | orchestrator | 
registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 5 weeks ago 591MB 2025-07-06 20:34:18.891478 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 5 weeks ago 354MB 2025-07-06 20:34:18.891488 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 5 weeks ago 352MB 2025-07-06 20:34:18.891518 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 5 weeks ago 411MB 2025-07-06 20:34:18.891528 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 5 weeks ago 345MB 2025-07-06 20:34:18.891538 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 5 weeks ago 359MB 2025-07-06 20:34:18.891565 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 5 weeks ago 326MB 2025-07-06 20:34:18.891575 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 5 weeks ago 325MB 2025-07-06 20:34:18.891585 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 5 weeks ago 1.21GB 2025-07-06 20:34:18.891595 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 5 weeks ago 362MB 2025-07-06 20:34:18.891604 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 5 weeks ago 362MB 2025-07-06 20:34:18.891614 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 5 weeks ago 1.15GB 2025-07-06 20:34:18.891624 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 5 weeks ago 1.04GB 2025-07-06 20:34:18.891633 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 5 weeks ago 1.25GB 2025-07-06 20:34:18.891642 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 5 weeks ago 1.2GB 2025-07-06 20:34:18.891652 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 5 weeks ago 1.31GB 2025-07-06 20:34:18.891662 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 5 weeks ago 1.12GB 2025-07-06 20:34:18.891671 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 5 weeks ago 1.12GB 2025-07-06 20:34:18.891688 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 5 weeks ago 1.1GB 2025-07-06 20:34:18.891698 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 5 weeks ago 1.1GB 2025-07-06 20:34:18.891724 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 5 weeks ago 1.1GB 2025-07-06 20:34:18.891750 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 5 weeks ago 1.41GB 2025-07-06 20:34:18.891761 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 5 weeks ago 1.41GB 2025-07-06 20:34:18.891782 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 5 weeks ago 1.06GB 2025-07-06 20:34:18.891794 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 5 weeks ago 1.06GB 2025-07-06 
20:34:18.891804 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 5 weeks ago 1.05GB 2025-07-06 20:34:18.891815 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 5 weeks ago 1.05GB 2025-07-06 20:34:18.891826 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 5 weeks ago 1.05GB 2025-07-06 20:34:18.891836 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 5 weeks ago 1.05GB 2025-07-06 20:34:18.891847 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 5 weeks ago 1.3GB 2025-07-06 20:34:18.891866 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 5 weeks ago 1.29GB 2025-07-06 20:34:18.891877 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 5 weeks ago 1.42GB 2025-07-06 20:34:18.891888 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 5 weeks ago 1.29GB 2025-07-06 20:34:18.891900 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 5 weeks ago 1.06GB 2025-07-06 20:34:18.891911 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 5 weeks ago 1.06GB 2025-07-06 20:34:18.891922 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 5 weeks ago 1.06GB 2025-07-06 20:34:18.891932 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 5 weeks ago 1.11GB 2025-07-06 20:34:18.891943 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 5 weeks ago 1.13GB 2025-07-06 20:34:18.891955 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 5 weeks ago 1.11GB 2025-07-06 20:34:18.891965 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 5 weeks ago 947MB 2025-07-06 20:34:18.891977 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 5 weeks ago 948MB 2025-07-06 20:34:18.891988 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 5 weeks ago 947MB 2025-07-06 20:34:18.891999 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 5 weeks ago 948MB 2025-07-06 20:34:18.892010 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 weeks ago 1.27GB 2025-07-06 20:34:19.133245 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-07-06 20:34:19.133959 | orchestrator | ++ semver 9.1.0 5.0.0 2025-07-06 20:34:19.189297 | orchestrator | 2025-07-06 20:34:19.189388 | orchestrator | ## Containers @ testbed-node-2 2025-07-06 20:34:19.189435 | orchestrator | 2025-07-06 20:34:19.189456 | orchestrator | + [[ 1 -eq -1 ]] 2025-07-06 20:34:19.189475 | orchestrator | + echo 2025-07-06 20:34:19.189496 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-07-06 20:34:19.189511 | orchestrator | + echo 2025-07-06 20:34:19.189522 | orchestrator | + osism container testbed-node-2 ps 2025-07-06 20:34:21.505619 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-07-06 20:34:21.505709 | orchestrator | 1ed31f65c3bd registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init 
--single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-07-06 20:34:21.505720 | orchestrator | 625dcdd6be94 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-07-06 20:34:21.505744 | orchestrator | 08139daa3a8f registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-07-06 20:34:21.505752 | orchestrator | 07e4c2b7ca13 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-07-06 20:34:21.505760 | orchestrator | 274239bac50d registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-07-06 20:34:21.505785 | orchestrator | 18ca5b7c36d3 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-07-06 20:34:21.505793 | orchestrator | 8159dbf6eb32 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor 2025-07-06 20:34:21.505800 | orchestrator | 73fbee63608e registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2025-07-06 20:34:21.505808 | orchestrator | 6284da227e0a registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api 2025-07-06 20:34:21.505815 | orchestrator | 90fc7b577dd1 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2025-07-06 20:34:21.505823 | orchestrator | 0385ed2b9ddd registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2025-07-06 20:34:21.505830 | orchestrator | 712df286c86b registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_novncproxy 2025-07-06 20:34:21.505838 | orchestrator | ae88e0e2180e registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2025-07-06 20:34:21.505845 | orchestrator | e33f6b9f3595 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2025-07-06 20:34:21.505853 | orchestrator | 9b33a27f3a7f registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) neutron_server 2025-07-06 20:34:21.505860 | orchestrator | 90237ca57d77 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_api 2025-07-06 20:34:21.505867 | orchestrator | cc11eea200fa registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 10 minutes (healthy) nova_conductor 2025-07-06 20:34:21.505875 | orchestrator | ba86387ea925 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2025-07-06 20:34:21.505882 | orchestrator | 5321391ebb42 
registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_worker 2025-07-06 20:34:21.505903 | orchestrator | 49c737ef2d6f registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener 2025-07-06 20:34:21.505911 | orchestrator | ece2304a2876 registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api 2025-07-06 20:34:21.505918 | orchestrator | d635ffa2d061 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api 2025-07-06 20:34:21.505926 | orchestrator | 17ff4bedfc9e registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 10 minutes (healthy) nova_scheduler 2025-07-06 20:34:21.505938 | orchestrator | 8063b7d5c21e registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) glance_api 2025-07-06 20:34:21.505945 | orchestrator | 74c511f9b7a4 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2025-07-06 20:34:21.505955 | orchestrator | f33ab2158809 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler 2025-07-06 20:34:21.505962 | orchestrator | 8efca383555b registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2025-07-06 20:34:21.505970 | orchestrator | 322aae94e3b8 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2025-07-06 20:34:21.505977 | orchestrator | ec77374d270b registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2025-07-06 20:34:21.505985 | orchestrator | f50be337e426 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2025-07-06 20:34:21.505992 | orchestrator | 8fe7c68acf62 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2025-07-06 20:34:21.506000 | orchestrator | 7b8a6abf365e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2 2025-07-06 20:34:21.506007 | orchestrator | 5861902b56d8 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-07-06 20:34:21.506057 | orchestrator | 36b133b99c5d registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-07-06 20:34:21.506072 | orchestrator | 06a0565ec3fe registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-07-06 20:34:21.506079 | orchestrator | 7d9b13a37064 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-07-06 
20:34:21.506086 | orchestrator | d5f5da087365 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-07-06 20:34:21.506093 | orchestrator | f46791cba872 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-07-06 20:34:21.506101 | orchestrator | 324d0c4087ba registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-07-06 20:34:21.506107 | orchestrator | 00e7e6a75f24 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-07-06 20:34:21.506121 | orchestrator | c18d504c076c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2 2025-07-06 20:34:21.506128 | orchestrator | 312e4896287a registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-07-06 20:34:21.506140 | orchestrator | a43eb2300694 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-07-06 20:34:21.506147 | orchestrator | 664a2b9a3506 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_northd 2025-07-06 20:34:21.506157 | orchestrator | 149f3f132d67 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db 2025-07-06 20:34:21.506164 | orchestrator | 62440ef76361 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-07-06 20:34:21.506171 | orchestrator | 84a80c8038d0 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-07-06 20:34:21.506178 | orchestrator | dfe8de9b5585 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-07-06 20:34:21.506185 | orchestrator | 3d014757d1c7 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2 2025-07-06 20:34:21.506192 | orchestrator | 18f26427fff4 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-07-06 20:34:21.506199 | orchestrator | bc18f7ce6a43 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-07-06 20:34:21.506206 | orchestrator | ae6bf284193a registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-07-06 20:34:21.506213 | orchestrator | a5db4e3dbe70 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-07-06 20:34:21.506220 | orchestrator | 9ac30e788b39 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-07-06 20:34:21.506227 | orchestrator | 899efbe9f33b registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 30 minutes ago 
Up 30 minutes cron 2025-07-06 20:34:21.506234 | orchestrator | 56921f620533 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-07-06 20:34:21.506241 | orchestrator | 63b68346adca registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-07-06 20:34:21.756741 | orchestrator | 2025-07-06 20:34:21.756841 | orchestrator | ## Images @ testbed-node-2 2025-07-06 20:34:21.756857 | orchestrator | 2025-07-06 20:34:21.756869 | orchestrator | + echo 2025-07-06 20:34:21.756881 | orchestrator | + echo '## Images @ testbed-node-2' 2025-07-06 20:34:21.756893 | orchestrator | + echo 2025-07-06 20:34:21.756905 | orchestrator | + osism container testbed-node-2 images 2025-07-06 20:34:23.851259 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-06 20:34:23.851330 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 5 weeks ago 319MB 2025-07-06 20:34:23.851350 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 5 weeks ago 319MB 2025-07-06 20:34:23.851354 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 5 weeks ago 330MB 2025-07-06 20:34:23.851358 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 5 weeks ago 1.59GB 2025-07-06 20:34:23.851362 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 5 weeks ago 1.55GB 2025-07-06 20:34:23.851366 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 5 weeks ago 419MB 2025-07-06 20:34:23.851369 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 5 weeks ago 747MB 2025-07-06 20:34:23.851373 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 5 weeks ago 327MB 2025-07-06 20:34:23.851377 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 5 weeks ago 376MB 2025-07-06 20:34:23.851380 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 5 weeks ago 629MB 2025-07-06 20:34:23.851384 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 5 weeks ago 1.01GB 2025-07-06 20:34:23.851388 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 5 weeks ago 591MB 2025-07-06 20:34:23.851392 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 5 weeks ago 354MB 2025-07-06 20:34:23.851396 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 5 weeks ago 352MB 2025-07-06 20:34:23.851399 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 5 weeks ago 411MB 2025-07-06 20:34:23.851403 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 5 weeks ago 345MB 2025-07-06 20:34:23.851440 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 5 weeks ago 359MB 2025-07-06 20:34:23.851444 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 5 weeks ago 326MB 2025-07-06 20:34:23.851448 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 5 weeks ago 325MB 
2025-07-06 20:34:23.851452 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 5 weeks ago 1.21GB 2025-07-06 20:34:23.851468 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 5 weeks ago 362MB 2025-07-06 20:34:23.851472 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 5 weeks ago 362MB 2025-07-06 20:34:23.851476 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 5 weeks ago 1.15GB 2025-07-06 20:34:23.851479 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 5 weeks ago 1.04GB 2025-07-06 20:34:23.851483 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 5 weeks ago 1.25GB 2025-07-06 20:34:23.851487 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 5 weeks ago 1.2GB 2025-07-06 20:34:23.851490 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 5 weeks ago 1.31GB 2025-07-06 20:34:23.851494 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 5 weeks ago 1.12GB 2025-07-06 20:34:23.851503 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 5 weeks ago 1.12GB 2025-07-06 20:34:23.851507 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 5 weeks ago 1.1GB 2025-07-06 20:34:23.851511 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 5 weeks ago 1.1GB 2025-07-06 20:34:23.851526 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 5 weeks ago 1.1GB 2025-07-06 20:34:23.851533 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 5 weeks ago 1.41GB 2025-07-06 20:34:23.851538 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 5 weeks ago 1.41GB 2025-07-06 20:34:23.851547 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 5 weeks ago 1.06GB 2025-07-06 20:34:23.851554 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 5 weeks ago 1.06GB 2025-07-06 20:34:23.851560 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 5 weeks ago 1.05GB 2025-07-06 20:34:23.851566 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 5 weeks ago 1.05GB 2025-07-06 20:34:23.851571 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 5 weeks ago 1.05GB 2025-07-06 20:34:23.851577 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 5 weeks ago 1.05GB 2025-07-06 20:34:23.851582 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 5 weeks ago 1.3GB 2025-07-06 20:34:23.851588 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 5 weeks ago 1.29GB 2025-07-06 20:34:23.851593 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 5 weeks ago 1.42GB 2025-07-06 20:34:23.851603 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 5 weeks ago 1.29GB 2025-07-06 
20:34:23.851609 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 5 weeks ago 1.06GB 2025-07-06 20:34:23.851614 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 5 weeks ago 1.06GB 2025-07-06 20:34:23.851620 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 5 weeks ago 1.06GB 2025-07-06 20:34:23.851626 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 5 weeks ago 1.11GB 2025-07-06 20:34:23.851632 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 5 weeks ago 1.13GB 2025-07-06 20:34:23.851639 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 5 weeks ago 1.11GB 2025-07-06 20:34:23.851644 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 5 weeks ago 947MB 2025-07-06 20:34:23.851648 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 5 weeks ago 948MB 2025-07-06 20:34:23.851652 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 5 weeks ago 947MB 2025-07-06 20:34:23.851655 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 5 weeks ago 948MB 2025-07-06 20:34:23.851668 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 weeks ago 1.27GB 2025-07-06 20:34:24.112656 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-07-06 20:34:24.118703 | orchestrator | + set -e 2025-07-06 20:34:24.118762 | orchestrator | + source /opt/manager-vars.sh 2025-07-06 20:34:24.119922 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-06 20:34:24.119984 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-06 20:34:24.119997 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-06 20:34:24.120008 | orchestrator | ++ CEPH_VERSION=reef 2025-07-06 20:34:24.120020 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-06 20:34:24.120031 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-06 20:34:24.120042 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-07-06 20:34:24.120053 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-07-06 20:34:24.120064 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-06 20:34:24.120074 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-06 20:34:24.120085 | orchestrator | ++ export ARA=false 2025-07-06 20:34:24.120096 | orchestrator | ++ ARA=false 2025-07-06 20:34:24.120107 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-06 20:34:24.120117 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-06 20:34:24.120128 | orchestrator | ++ export TEMPEST=false 2025-07-06 20:34:24.120138 | orchestrator | ++ TEMPEST=false 2025-07-06 20:34:24.120149 | orchestrator | ++ export IS_ZUUL=true 2025-07-06 20:34:24.120159 | orchestrator | ++ IS_ZUUL=true 2025-07-06 20:34:24.120170 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.103 2025-07-06 20:34:24.120181 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.103 2025-07-06 20:34:24.120191 | orchestrator | ++ export EXTERNAL_API=false 2025-07-06 20:34:24.120207 | orchestrator | ++ EXTERNAL_API=false 2025-07-06 20:34:24.120218 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-06 20:34:24.120229 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-06 20:34:24.120240 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-06 20:34:24.120250 | 
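The container and image listings above are produced by a per-node inspection loop: for each of testbed-manager and testbed-node-0/1/2 the deploy script runs "osism container <node> ps" followed by "osism container <node> images", after comparing MANAGER_VERSION against 5.0.0 with a semver helper (the "+ for node in ...", "++ semver 9.1.0 5.0.0" and "+ [[ 1 -eq -1 ]]" trace lines). A minimal sketch of that loop, reconstructed from the trace; the body of the old-manager branch is not visible in this excerpt and is left empty:

    for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2; do
        if [[ "$(semver "${MANAGER_VERSION}" 5.0.0)" -eq -1 ]]; then
            :  # branch for managers older than 5.0.0; its body is not shown in this log
        fi
        echo; echo "## Containers @ ${node}"; echo
        osism container "${node}" ps
        echo; echo "## Images @ ${node}"; echo
        osism container "${node}" images
    done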
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-06 20:34:24.120261 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-06 20:34:24.120271 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-06 20:34:24.120282 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-07-06 20:34:24.120293 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-07-06 20:34:24.130627 | orchestrator | + set -e 2025-07-06 20:34:24.130718 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-06 20:34:24.130732 | orchestrator | ++ export INTERACTIVE=false 2025-07-06 20:34:24.130743 | orchestrator | ++ INTERACTIVE=false 2025-07-06 20:34:24.131629 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-06 20:34:24.131710 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-06 20:34:24.131726 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-07-06 20:34:24.131756 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-06 20:34:24.137562 | orchestrator | 2025-07-06 20:34:24.137629 | orchestrator | # Ceph status 2025-07-06 20:34:24.137641 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-07-06 20:34:24.137651 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-07-06 20:34:24.137660 | orchestrator | + echo 2025-07-06 20:34:24.137668 | orchestrator | + echo '# Ceph status' 2025-07-06 20:34:24.137676 | orchestrator | 2025-07-06 20:34:24.137684 | orchestrator | + echo 2025-07-06 20:34:24.137692 | orchestrator | + ceph -s 2025-07-06 20:34:24.699264 | orchestrator | cluster: 2025-07-06 20:34:24.699399 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-07-06 20:34:24.699450 | orchestrator | health: HEALTH_OK 2025-07-06 20:34:24.699456 | orchestrator | 2025-07-06 20:34:24.699462 | orchestrator | services: 2025-07-06 20:34:24.699468 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2025-07-06 20:34:24.699475 | orchestrator | mgr: testbed-node-2(active, since 16m), standbys: testbed-node-1, testbed-node-0 2025-07-06 20:34:24.699481 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-07-06 20:34:24.699487 | orchestrator | osd: 6 osds: 6 up (since 25m), 6 in (since 26m) 2025-07-06 20:34:24.699493 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-07-06 20:34:24.699498 | orchestrator | 2025-07-06 20:34:24.699503 | orchestrator | data: 2025-07-06 20:34:24.699509 | orchestrator | volumes: 1/1 healthy 2025-07-06 20:34:24.699525 | orchestrator | pools: 14 pools, 401 pgs 2025-07-06 20:34:24.699531 | orchestrator | objects: 524 objects, 2.2 GiB 2025-07-06 20:34:24.699536 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-07-06 20:34:24.699542 | orchestrator | pgs: 401 active+clean 2025-07-06 20:34:24.699566 | orchestrator | 2025-07-06 20:34:24.752350 | orchestrator | 2025-07-06 20:34:24.752485 | orchestrator | # Ceph versions 2025-07-06 20:34:24.752500 | orchestrator | 2025-07-06 20:34:24.752512 | orchestrator | + echo 2025-07-06 20:34:24.752523 | orchestrator | + echo '# Ceph versions' 2025-07-06 20:34:24.752535 | orchestrator | + echo 2025-07-06 20:34:24.752546 | orchestrator | + ceph versions 2025-07-06 20:34:25.355999 | orchestrator | { 2025-07-06 20:34:25.356103 | orchestrator | "mon": { 2025-07-06 20:34:25.356118 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-06 20:34:25.356131 | orchestrator | }, 2025-07-06 
20:34:25.356142 | orchestrator | "mgr": { 2025-07-06 20:34:25.356153 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-06 20:34:25.356163 | orchestrator | }, 2025-07-06 20:34:25.356174 | orchestrator | "osd": { 2025-07-06 20:34:25.356185 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-07-06 20:34:25.356196 | orchestrator | }, 2025-07-06 20:34:25.356206 | orchestrator | "mds": { 2025-07-06 20:34:25.356217 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-06 20:34:25.356227 | orchestrator | }, 2025-07-06 20:34:25.356238 | orchestrator | "rgw": { 2025-07-06 20:34:25.356249 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-06 20:34:25.356260 | orchestrator | }, 2025-07-06 20:34:25.356270 | orchestrator | "overall": { 2025-07-06 20:34:25.356282 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-07-06 20:34:25.356293 | orchestrator | } 2025-07-06 20:34:25.356303 | orchestrator | } 2025-07-06 20:34:25.409029 | orchestrator | 2025-07-06 20:34:25.409123 | orchestrator | # Ceph OSD tree 2025-07-06 20:34:25.409138 | orchestrator | 2025-07-06 20:34:25.409150 | orchestrator | + echo 2025-07-06 20:34:25.409161 | orchestrator | + echo '# Ceph OSD tree' 2025-07-06 20:34:25.409173 | orchestrator | + echo 2025-07-06 20:34:25.409184 | orchestrator | + ceph osd df tree 2025-07-06 20:34:25.940349 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-07-06 20:34:25.940501 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-07-06 20:34:25.940515 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-07-06 20:34:25.940524 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.0 GiB 955 MiB 1 KiB 70 MiB 19 GiB 5.01 0.85 209 up osd.1 2025-07-06 20:34:25.940532 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.82 1.15 181 up osd.3 2025-07-06 20:34:25.940542 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-07-06 20:34:25.940570 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 6.05 1.02 186 up osd.0 2025-07-06 20:34:25.940585 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.79 0.98 202 up osd.4 2025-07-06 20:34:25.940598 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-07-06 20:34:25.940610 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 74 MiB 18 GiB 7.41 1.25 206 up osd.2 2025-07-06 20:34:25.940623 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 904 MiB 835 MiB 1 KiB 70 MiB 19 GiB 4.42 0.75 186 up osd.5 2025-07-06 20:34:25.940635 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-07-06 20:34:25.940648 | orchestrator | MIN/MAX VAR: 0.75/1.25 STDDEV: 1.01 2025-07-06 20:34:25.986281 | orchestrator | 2025-07-06 20:34:25.986378 | orchestrator | # Ceph monitor status 2025-07-06 20:34:25.986397 | orchestrator | 2025-07-06 20:34:25.986434 | orchestrator | + echo 2025-07-06 20:34:25.986450 | orchestrator | + echo '# Ceph monitor status' 2025-07-06 20:34:25.986494 | orchestrator | + echo 2025-07-06 20:34:25.986505 | 
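With CEPH_STACK=ceph-ansible, check-services.sh (traced above) hands off to /opt/configuration/scripts/check/100-ceph-with-ansible.sh, which at this point only prints ceph -s, ceph versions and ceph osd df tree. If the job should fail on an unhealthy or mixed-version cluster instead of merely logging it, assertions along the following lines would do; this is a sketch, not part of the script as far as this excerpt shows (jq is available on the host, since the quorum check further down pipes through it):

    # Sketch only: turn the printed status into hard checks.
    health=$(ceph health)
    [[ "${health}" == HEALTH_OK ]] || { echo "unexpected Ceph health: ${health}" >&2; exit 1; }

    # ceph versions emits JSON; exactly one distinct version should be running overall.
    distinct=$(ceph versions | jq '.overall | keys | length')
    [[ "${distinct}" -eq 1 ]] || { echo "mixed Ceph versions detected" >&2; exit 1; }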
orchestrator | + ceph mon stat 2025-07-06 20:34:26.602100 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-07-06 20:34:26.644071 | orchestrator | 2025-07-06 20:34:26.644162 | orchestrator | # Ceph quorum status 2025-07-06 20:34:26.644176 | orchestrator | 2025-07-06 20:34:26.644187 | orchestrator | + echo 2025-07-06 20:34:26.644197 | orchestrator | + echo '# Ceph quorum status' 2025-07-06 20:34:26.644207 | orchestrator | + echo 2025-07-06 20:34:26.644571 | orchestrator | + ceph quorum_status 2025-07-06 20:34:26.644593 | orchestrator | + jq 2025-07-06 20:34:27.276827 | orchestrator | { 2025-07-06 20:34:27.276927 | orchestrator | "election_epoch": 4, 2025-07-06 20:34:27.276942 | orchestrator | "quorum": [ 2025-07-06 20:34:27.276955 | orchestrator | 0, 2025-07-06 20:34:27.276967 | orchestrator | 1, 2025-07-06 20:34:27.276978 | orchestrator | 2 2025-07-06 20:34:27.276988 | orchestrator | ], 2025-07-06 20:34:27.276999 | orchestrator | "quorum_names": [ 2025-07-06 20:34:27.277010 | orchestrator | "testbed-node-0", 2025-07-06 20:34:27.277021 | orchestrator | "testbed-node-1", 2025-07-06 20:34:27.277031 | orchestrator | "testbed-node-2" 2025-07-06 20:34:27.277042 | orchestrator | ], 2025-07-06 20:34:27.277053 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-07-06 20:34:27.277065 | orchestrator | "quorum_age": 1732, 2025-07-06 20:34:27.277076 | orchestrator | "features": { 2025-07-06 20:34:27.277087 | orchestrator | "quorum_con": "4540138322906710015", 2025-07-06 20:34:27.277097 | orchestrator | "quorum_mon": [ 2025-07-06 20:34:27.277108 | orchestrator | "kraken", 2025-07-06 20:34:27.277118 | orchestrator | "luminous", 2025-07-06 20:34:27.277129 | orchestrator | "mimic", 2025-07-06 20:34:27.277140 | orchestrator | "osdmap-prune", 2025-07-06 20:34:27.277150 | orchestrator | "nautilus", 2025-07-06 20:34:27.277161 | orchestrator | "octopus", 2025-07-06 20:34:27.277172 | orchestrator | "pacific", 2025-07-06 20:34:27.277182 | orchestrator | "elector-pinging", 2025-07-06 20:34:27.277193 | orchestrator | "quincy", 2025-07-06 20:34:27.277203 | orchestrator | "reef" 2025-07-06 20:34:27.277214 | orchestrator | ] 2025-07-06 20:34:27.277225 | orchestrator | }, 2025-07-06 20:34:27.277236 | orchestrator | "monmap": { 2025-07-06 20:34:27.277246 | orchestrator | "epoch": 1, 2025-07-06 20:34:27.277257 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-07-06 20:34:27.277269 | orchestrator | "modified": "2025-07-06T20:05:22.069430Z", 2025-07-06 20:34:27.277280 | orchestrator | "created": "2025-07-06T20:05:22.069430Z", 2025-07-06 20:34:27.277291 | orchestrator | "min_mon_release": 18, 2025-07-06 20:34:27.277301 | orchestrator | "min_mon_release_name": "reef", 2025-07-06 20:34:27.277312 | orchestrator | "election_strategy": 1, 2025-07-06 20:34:27.277322 | orchestrator | "disallowed_leaders: ": "", 2025-07-06 20:34:27.277333 | orchestrator | "stretch_mode": false, 2025-07-06 20:34:27.277344 | orchestrator | "tiebreaker_mon": "", 2025-07-06 20:34:27.277356 | orchestrator | "removed_ranks: ": "", 2025-07-06 20:34:27.277368 | orchestrator | "features": { 2025-07-06 20:34:27.277380 | orchestrator | "persistent": [ 2025-07-06 20:34:27.277392 | orchestrator | "kraken", 
2025-07-06 20:34:27.277404 | orchestrator | "luminous", 2025-07-06 20:34:27.277443 | orchestrator | "mimic", 2025-07-06 20:34:27.277455 | orchestrator | "osdmap-prune", 2025-07-06 20:34:27.277467 | orchestrator | "nautilus", 2025-07-06 20:34:27.277478 | orchestrator | "octopus", 2025-07-06 20:34:27.277490 | orchestrator | "pacific", 2025-07-06 20:34:27.277501 | orchestrator | "elector-pinging", 2025-07-06 20:34:27.277511 | orchestrator | "quincy", 2025-07-06 20:34:27.277522 | orchestrator | "reef" 2025-07-06 20:34:27.277533 | orchestrator | ], 2025-07-06 20:34:27.277543 | orchestrator | "optional": [] 2025-07-06 20:34:27.277554 | orchestrator | }, 2025-07-06 20:34:27.277565 | orchestrator | "mons": [ 2025-07-06 20:34:27.277575 | orchestrator | { 2025-07-06 20:34:27.277586 | orchestrator | "rank": 0, 2025-07-06 20:34:27.277597 | orchestrator | "name": "testbed-node-0", 2025-07-06 20:34:27.277607 | orchestrator | "public_addrs": { 2025-07-06 20:34:27.277618 | orchestrator | "addrvec": [ 2025-07-06 20:34:27.277629 | orchestrator | { 2025-07-06 20:34:27.277640 | orchestrator | "type": "v2", 2025-07-06 20:34:27.277651 | orchestrator | "addr": "192.168.16.10:3300", 2025-07-06 20:34:27.277662 | orchestrator | "nonce": 0 2025-07-06 20:34:27.277673 | orchestrator | }, 2025-07-06 20:34:27.277759 | orchestrator | { 2025-07-06 20:34:27.277773 | orchestrator | "type": "v1", 2025-07-06 20:34:27.277784 | orchestrator | "addr": "192.168.16.10:6789", 2025-07-06 20:34:27.277794 | orchestrator | "nonce": 0 2025-07-06 20:34:27.277805 | orchestrator | } 2025-07-06 20:34:27.277816 | orchestrator | ] 2025-07-06 20:34:27.277827 | orchestrator | }, 2025-07-06 20:34:27.277837 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-07-06 20:34:27.277848 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-07-06 20:34:27.277859 | orchestrator | "priority": 0, 2025-07-06 20:34:27.277870 | orchestrator | "weight": 0, 2025-07-06 20:34:27.277880 | orchestrator | "crush_location": "{}" 2025-07-06 20:34:27.277891 | orchestrator | }, 2025-07-06 20:34:27.277902 | orchestrator | { 2025-07-06 20:34:27.277912 | orchestrator | "rank": 1, 2025-07-06 20:34:27.277923 | orchestrator | "name": "testbed-node-1", 2025-07-06 20:34:27.277949 | orchestrator | "public_addrs": { 2025-07-06 20:34:27.277960 | orchestrator | "addrvec": [ 2025-07-06 20:34:27.277971 | orchestrator | { 2025-07-06 20:34:27.278082 | orchestrator | "type": "v2", 2025-07-06 20:34:27.278097 | orchestrator | "addr": "192.168.16.11:3300", 2025-07-06 20:34:27.278108 | orchestrator | "nonce": 0 2025-07-06 20:34:27.278118 | orchestrator | }, 2025-07-06 20:34:27.278129 | orchestrator | { 2025-07-06 20:34:27.278140 | orchestrator | "type": "v1", 2025-07-06 20:34:27.278150 | orchestrator | "addr": "192.168.16.11:6789", 2025-07-06 20:34:27.278162 | orchestrator | "nonce": 0 2025-07-06 20:34:27.278180 | orchestrator | } 2025-07-06 20:34:27.278198 | orchestrator | ] 2025-07-06 20:34:27.278215 | orchestrator | }, 2025-07-06 20:34:27.278232 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-07-06 20:34:27.278250 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-07-06 20:34:27.278267 | orchestrator | "priority": 0, 2025-07-06 20:34:27.278284 | orchestrator | "weight": 0, 2025-07-06 20:34:27.278302 | orchestrator | "crush_location": "{}" 2025-07-06 20:34:27.278320 | orchestrator | }, 2025-07-06 20:34:27.278339 | orchestrator | { 2025-07-06 20:34:27.278356 | orchestrator | "rank": 2, 2025-07-06 20:34:27.278373 | orchestrator | "name": "testbed-node-2", 2025-07-06 
20:34:27.278391 | orchestrator | "public_addrs": { 2025-07-06 20:34:27.278450 | orchestrator | "addrvec": [ 2025-07-06 20:34:27.278473 | orchestrator | { 2025-07-06 20:34:27.278487 | orchestrator | "type": "v2", 2025-07-06 20:34:27.278498 | orchestrator | "addr": "192.168.16.12:3300", 2025-07-06 20:34:27.278508 | orchestrator | "nonce": 0 2025-07-06 20:34:27.278519 | orchestrator | }, 2025-07-06 20:34:27.278529 | orchestrator | { 2025-07-06 20:34:27.278540 | orchestrator | "type": "v1", 2025-07-06 20:34:27.278551 | orchestrator | "addr": "192.168.16.12:6789", 2025-07-06 20:34:27.278561 | orchestrator | "nonce": 0 2025-07-06 20:34:27.278572 | orchestrator | } 2025-07-06 20:34:27.278582 | orchestrator | ] 2025-07-06 20:34:27.278593 | orchestrator | }, 2025-07-06 20:34:27.278603 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-07-06 20:34:27.278614 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-07-06 20:34:27.278624 | orchestrator | "priority": 0, 2025-07-06 20:34:27.278635 | orchestrator | "weight": 0, 2025-07-06 20:34:27.278646 | orchestrator | "crush_location": "{}" 2025-07-06 20:34:27.278656 | orchestrator | } 2025-07-06 20:34:27.278667 | orchestrator | ] 2025-07-06 20:34:27.278677 | orchestrator | } 2025-07-06 20:34:27.278688 | orchestrator | } 2025-07-06 20:34:27.278698 | orchestrator | 2025-07-06 20:34:27.278709 | orchestrator | # Ceph free space status 2025-07-06 20:34:27.278720 | orchestrator | 2025-07-06 20:34:27.278731 | orchestrator | + echo 2025-07-06 20:34:27.278742 | orchestrator | + echo '# Ceph free space status' 2025-07-06 20:34:27.278752 | orchestrator | + echo 2025-07-06 20:34:27.278763 | orchestrator | + ceph df 2025-07-06 20:34:27.862156 | orchestrator | --- RAW STORAGE --- 2025-07-06 20:34:27.862277 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-07-06 20:34:27.862314 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-07-06 20:34:27.862332 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-07-06 20:34:27.862349 | orchestrator | 2025-07-06 20:34:27.862367 | orchestrator | --- POOLS --- 2025-07-06 20:34:27.862384 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-07-06 20:34:27.862402 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-07-06 20:34:27.862515 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-07-06 20:34:27.862527 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-07-06 20:34:27.862537 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-07-06 20:34:27.862548 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-07-06 20:34:27.862558 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-07-06 20:34:27.862567 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-07-06 20:34:27.862577 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-07-06 20:34:27.862586 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-07-06 20:34:27.862596 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-07-06 20:34:27.862606 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-07-06 20:34:27.862615 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.96 35 GiB 2025-07-06 20:34:27.862626 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-07-06 20:34:27.862637 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-07-06 20:34:27.910346 | orchestrator | ++ semver 9.1.0 5.0.0 2025-07-06 20:34:27.964172 | orchestrator | + [[ 1 -eq -1 ]] 2025-07-06 20:34:27.964285 | 
orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-07-06 20:34:27.964302 | orchestrator | + osism apply facts 2025-07-06 20:34:29.603360 | orchestrator | Registering Redlock._acquired_script 2025-07-06 20:34:29.604298 | orchestrator | Registering Redlock._extend_script 2025-07-06 20:34:29.604333 | orchestrator | Registering Redlock._release_script 2025-07-06 20:34:29.672652 | orchestrator | 2025-07-06 20:34:29 | INFO  | Task b669ae24-5e0c-4b12-b512-4357283d89cb (facts) was prepared for execution. 2025-07-06 20:34:29.672742 | orchestrator | 2025-07-06 20:34:29 | INFO  | It takes a moment until task b669ae24-5e0c-4b12-b512-4357283d89cb (facts) has been started and output is visible here. 2025-07-06 20:34:33.790491 | orchestrator | 2025-07-06 20:34:33.791493 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-07-06 20:34:33.795561 | orchestrator | 2025-07-06 20:34:33.796676 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-07-06 20:34:33.797327 | orchestrator | Sunday 06 July 2025 20:34:33 +0000 (0:00:00.282) 0:00:00.282 *********** 2025-07-06 20:34:35.318607 | orchestrator | ok: [testbed-manager] 2025-07-06 20:34:35.324352 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:34:35.324405 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:34:35.324411 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:34:35.326636 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:34:35.329318 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:34:35.329327 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:34:35.329331 | orchestrator | 2025-07-06 20:34:35.329778 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-07-06 20:34:35.330277 | orchestrator | Sunday 06 July 2025 20:34:35 +0000 (0:00:01.521) 0:00:01.804 *********** 2025-07-06 20:34:35.512242 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:34:35.622971 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:35.721827 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:35.803959 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:35.897383 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:34:36.645787 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:34:36.646929 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:34:36.647578 | orchestrator | 2025-07-06 20:34:36.651375 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-06 20:34:36.651550 | orchestrator | 2025-07-06 20:34:36.651578 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-06 20:34:36.653073 | orchestrator | Sunday 06 July 2025 20:34:36 +0000 (0:00:01.335) 0:00:03.140 *********** 2025-07-06 20:34:42.092609 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:34:42.092941 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:34:42.094473 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:34:42.095015 | orchestrator | ok: [testbed-manager] 2025-07-06 20:34:42.095708 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:34:42.097007 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:34:42.097052 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:34:42.097462 | orchestrator | 2025-07-06 20:34:42.097996 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-07-06 20:34:42.098802 | orchestrator | 2025-07-06 20:34:42.099182 | 
orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-07-06 20:34:42.099910 | orchestrator | Sunday 06 July 2025 20:34:42 +0000 (0:00:05.447) 0:00:08.587 *********** 2025-07-06 20:34:42.265315 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:34:42.347113 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:42.430646 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:42.512260 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:42.593899 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:34:42.634365 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:34:42.634792 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:34:42.636502 | orchestrator | 2025-07-06 20:34:42.636591 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:34:42.636609 | orchestrator | 2025-07-06 20:34:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 20:34:42.636924 | orchestrator | 2025-07-06 20:34:42 | INFO  | Please wait and do not abort execution. 2025-07-06 20:34:42.637683 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:34:42.638673 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:34:42.639585 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:34:42.639783 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:34:42.640787 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:34:42.641137 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:34:42.641545 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:34:42.641928 | orchestrator | 2025-07-06 20:34:42.642312 | orchestrator | 2025-07-06 20:34:42.642789 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:34:42.643498 | orchestrator | Sunday 06 July 2025 20:34:42 +0000 (0:00:00.542) 0:00:09.130 *********** 2025-07-06 20:34:42.646244 | orchestrator | =============================================================================== 2025-07-06 20:34:42.649195 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.45s 2025-07-06 20:34:42.650113 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.52s 2025-07-06 20:34:42.650895 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.34s 2025-07-06 20:34:42.651713 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2025-07-06 20:34:43.404575 | orchestrator | + osism validate ceph-mons 2025-07-06 20:34:45.088955 | orchestrator | Registering Redlock._acquired_script 2025-07-06 20:34:45.089076 | orchestrator | Registering Redlock._extend_script 2025-07-06 20:34:45.089098 | orchestrator | Registering Redlock._release_script 2025-07-06 20:35:05.047847 | orchestrator | 2025-07-06 20:35:05.047959 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-07-06 20:35:05.047977 | orchestrator | 2025-07-06 
20:35:05.047989 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-07-06 20:35:05.048001 | orchestrator | Sunday 06 July 2025 20:34:49 +0000 (0:00:00.442) 0:00:00.442 *********** 2025-07-06 20:35:05.048012 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:35:05.048023 | orchestrator | 2025-07-06 20:35:05.048034 | orchestrator | TASK [Create report output directory] ****************************************** 2025-07-06 20:35:05.048045 | orchestrator | Sunday 06 July 2025 20:34:50 +0000 (0:00:00.721) 0:00:01.164 *********** 2025-07-06 20:35:05.048057 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:35:05.048068 | orchestrator | 2025-07-06 20:35:05.048079 | orchestrator | TASK [Define report vars] ****************************************************** 2025-07-06 20:35:05.048090 | orchestrator | Sunday 06 July 2025 20:34:51 +0000 (0:00:00.908) 0:00:02.073 *********** 2025-07-06 20:35:05.048101 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:05.048113 | orchestrator | 2025-07-06 20:35:05.048125 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-07-06 20:35:05.048136 | orchestrator | Sunday 06 July 2025 20:34:51 +0000 (0:00:00.232) 0:00:02.305 *********** 2025-07-06 20:35:05.048147 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:05.048158 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:35:05.048169 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:35:05.048180 | orchestrator | 2025-07-06 20:35:05.048191 | orchestrator | TASK [Get container info] ****************************************************** 2025-07-06 20:35:05.048202 | orchestrator | Sunday 06 July 2025 20:34:51 +0000 (0:00:00.293) 0:00:02.599 *********** 2025-07-06 20:35:05.048229 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:05.048242 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:35:05.048253 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:35:05.048264 | orchestrator | 2025-07-06 20:35:05.048275 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-07-06 20:35:05.048286 | orchestrator | Sunday 06 July 2025 20:34:52 +0000 (0:00:00.973) 0:00:03.572 *********** 2025-07-06 20:35:05.048297 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:05.048309 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:35:05.048320 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:35:05.048331 | orchestrator | 2025-07-06 20:35:05.048342 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-07-06 20:35:05.048353 | orchestrator | Sunday 06 July 2025 20:34:52 +0000 (0:00:00.291) 0:00:03.864 *********** 2025-07-06 20:35:05.048364 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:05.048376 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:35:05.048389 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:35:05.048401 | orchestrator | 2025-07-06 20:35:05.048415 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-06 20:35:05.048429 | orchestrator | Sunday 06 July 2025 20:34:53 +0000 (0:00:00.538) 0:00:04.402 *********** 2025-07-06 20:35:05.048466 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:05.048478 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:35:05.048489 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:35:05.048500 | orchestrator | 2025-07-06 
20:35:05.048512 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-07-06 20:35:05.048523 | orchestrator | Sunday 06 July 2025 20:34:53 +0000 (0:00:00.322) 0:00:04.725 *********** 2025-07-06 20:35:05.048534 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:05.048545 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:35:05.048556 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:35:05.048567 | orchestrator | 2025-07-06 20:35:05.048578 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-07-06 20:35:05.048589 | orchestrator | Sunday 06 July 2025 20:34:54 +0000 (0:00:00.302) 0:00:05.027 *********** 2025-07-06 20:35:05.048600 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:05.048634 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:35:05.048646 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:35:05.048657 | orchestrator | 2025-07-06 20:35:05.048667 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-06 20:35:05.048678 | orchestrator | Sunday 06 July 2025 20:34:54 +0000 (0:00:00.332) 0:00:05.360 *********** 2025-07-06 20:35:05.048689 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:05.048700 | orchestrator | 2025-07-06 20:35:05.048710 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-06 20:35:05.048721 | orchestrator | Sunday 06 July 2025 20:34:55 +0000 (0:00:00.729) 0:00:06.090 *********** 2025-07-06 20:35:05.048732 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:05.048742 | orchestrator | 2025-07-06 20:35:05.048753 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-06 20:35:05.048764 | orchestrator | Sunday 06 July 2025 20:34:55 +0000 (0:00:00.333) 0:00:06.423 *********** 2025-07-06 20:35:05.048774 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:05.048785 | orchestrator | 2025-07-06 20:35:05.048796 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:35:05.048806 | orchestrator | Sunday 06 July 2025 20:34:55 +0000 (0:00:00.276) 0:00:06.700 *********** 2025-07-06 20:35:05.048817 | orchestrator | 2025-07-06 20:35:05.048828 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:35:05.048838 | orchestrator | Sunday 06 July 2025 20:34:55 +0000 (0:00:00.079) 0:00:06.780 *********** 2025-07-06 20:35:05.048849 | orchestrator | 2025-07-06 20:35:05.048859 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:35:05.048870 | orchestrator | Sunday 06 July 2025 20:34:55 +0000 (0:00:00.077) 0:00:06.857 *********** 2025-07-06 20:35:05.048881 | orchestrator | 2025-07-06 20:35:05.048891 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-06 20:35:05.048903 | orchestrator | Sunday 06 July 2025 20:34:55 +0000 (0:00:00.075) 0:00:06.932 *********** 2025-07-06 20:35:05.048913 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:05.048924 | orchestrator | 2025-07-06 20:35:05.048935 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-07-06 20:35:05.048946 | orchestrator | Sunday 06 July 2025 20:34:56 +0000 (0:00:00.270) 0:00:07.203 *********** 2025-07-06 20:35:05.048957 | orchestrator | skipping: 
[testbed-node-0] 2025-07-06 20:35:05.048968 | orchestrator | 2025-07-06 20:35:05.048995 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-07-06 20:35:05.049007 | orchestrator | Sunday 06 July 2025 20:34:56 +0000 (0:00:00.287) 0:00:07.490 *********** 2025-07-06 20:35:05.049018 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:05.049029 | orchestrator | 2025-07-06 20:35:05.049039 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-07-06 20:35:05.049050 | orchestrator | Sunday 06 July 2025 20:34:56 +0000 (0:00:00.113) 0:00:07.604 *********** 2025-07-06 20:35:05.049061 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:35:05.049071 | orchestrator | 2025-07-06 20:35:05.049082 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-07-06 20:35:05.049093 | orchestrator | Sunday 06 July 2025 20:34:58 +0000 (0:00:01.577) 0:00:09.182 *********** 2025-07-06 20:35:05.049103 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:05.049114 | orchestrator | 2025-07-06 20:35:05.049124 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-07-06 20:35:05.049135 | orchestrator | Sunday 06 July 2025 20:34:58 +0000 (0:00:00.304) 0:00:09.487 *********** 2025-07-06 20:35:05.049146 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:05.049156 | orchestrator | 2025-07-06 20:35:05.049167 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-07-06 20:35:05.049178 | orchestrator | Sunday 06 July 2025 20:34:58 +0000 (0:00:00.308) 0:00:09.795 *********** 2025-07-06 20:35:05.049188 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:05.049199 | orchestrator | 2025-07-06 20:35:05.049210 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-07-06 20:35:05.049267 | orchestrator | Sunday 06 July 2025 20:34:59 +0000 (0:00:00.325) 0:00:10.120 *********** 2025-07-06 20:35:05.049279 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:05.049290 | orchestrator | 2025-07-06 20:35:05.049301 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-07-06 20:35:05.049311 | orchestrator | Sunday 06 July 2025 20:34:59 +0000 (0:00:00.293) 0:00:10.414 *********** 2025-07-06 20:35:05.049322 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:05.049333 | orchestrator | 2025-07-06 20:35:05.049344 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-07-06 20:35:05.049355 | orchestrator | Sunday 06 July 2025 20:34:59 +0000 (0:00:00.113) 0:00:10.527 *********** 2025-07-06 20:35:05.049365 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:05.049376 | orchestrator | 2025-07-06 20:35:05.049387 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-07-06 20:35:05.049397 | orchestrator | Sunday 06 July 2025 20:34:59 +0000 (0:00:00.133) 0:00:10.661 *********** 2025-07-06 20:35:05.049408 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:05.049419 | orchestrator | 2025-07-06 20:35:05.049429 | orchestrator | TASK [Gather status data] ****************************************************** 2025-07-06 20:35:05.049440 | orchestrator | Sunday 06 July 2025 20:34:59 +0000 (0:00:00.110) 0:00:10.772 *********** 2025-07-06 20:35:05.049470 | orchestrator | changed: [testbed-node-0] 
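
[editorial note] The quorum and health tasks in this ceph-mons validation boil down to comparing the monmap against the current quorum set and checking the overall cluster health. Below is a minimal, hand-written sketch of that kind of check; it is not taken from the job itself and assumes the ceph CLI and jq are available (for example exec'd inside one of the mon containers) and that the JSON field names match the Reef release used here.

# Sketch only: compare the number of monitors in quorum with the monmap size,
# roughly what the quorum test above verifies.
quorum=$(ceph quorum_status --format json | jq '.quorum_names | length')
monmap=$(ceph quorum_status --format json | jq '.monmap.mons | length')
if [ "$quorum" -ne "$monmap" ]; then
    echo "FAILED: only ${quorum}/${monmap} monitors are in quorum" >&2
    exit 1
fi

# Health check in the same spirit: accept HEALTH_OK (and HEALTH_WARN in the
# non-strict case), fail on anything else.
status=$(ceph health --format json | jq -r '.status')
case "$status" in
    HEALTH_OK|HEALTH_WARN) echo "health acceptable: $status" ;;
    *) echo "FAILED: cluster health is $status" >&2; exit 1 ;;
esac

Judging from the task names, the strict variant of the health test only passes on HEALTH_OK, while the non-strict variant used above also tolerates HEALTH_WARN.
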
2025-07-06 20:35:05.049481 | orchestrator | 2025-07-06 20:35:05.049492 | orchestrator | TASK [Set health test data] **************************************************** 2025-07-06 20:35:05.049503 | orchestrator | Sunday 06 July 2025 20:35:01 +0000 (0:00:01.362) 0:00:12.134 *********** 2025-07-06 20:35:05.049513 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:05.049524 | orchestrator | 2025-07-06 20:35:05.049535 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-07-06 20:35:05.049546 | orchestrator | Sunday 06 July 2025 20:35:01 +0000 (0:00:00.289) 0:00:12.424 *********** 2025-07-06 20:35:05.049556 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:05.049567 | orchestrator | 2025-07-06 20:35:05.049578 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-07-06 20:35:05.049589 | orchestrator | Sunday 06 July 2025 20:35:01 +0000 (0:00:00.150) 0:00:12.575 *********** 2025-07-06 20:35:05.049600 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:05.049610 | orchestrator | 2025-07-06 20:35:05.049621 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-07-06 20:35:05.049632 | orchestrator | Sunday 06 July 2025 20:35:01 +0000 (0:00:00.147) 0:00:12.723 *********** 2025-07-06 20:35:05.049642 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:05.049653 | orchestrator | 2025-07-06 20:35:05.049664 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-07-06 20:35:05.049675 | orchestrator | Sunday 06 July 2025 20:35:01 +0000 (0:00:00.129) 0:00:12.852 *********** 2025-07-06 20:35:05.049685 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:05.049696 | orchestrator | 2025-07-06 20:35:05.049706 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-07-06 20:35:05.049717 | orchestrator | Sunday 06 July 2025 20:35:02 +0000 (0:00:00.345) 0:00:13.198 *********** 2025-07-06 20:35:05.049728 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:35:05.049824 | orchestrator | 2025-07-06 20:35:05.049838 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-07-06 20:35:05.049849 | orchestrator | Sunday 06 July 2025 20:35:02 +0000 (0:00:00.246) 0:00:13.445 *********** 2025-07-06 20:35:05.049860 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:05.049871 | orchestrator | 2025-07-06 20:35:05.049881 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-06 20:35:05.049892 | orchestrator | Sunday 06 July 2025 20:35:02 +0000 (0:00:00.249) 0:00:13.694 *********** 2025-07-06 20:35:05.049903 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:35:05.049924 | orchestrator | 2025-07-06 20:35:05.049936 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-06 20:35:05.049947 | orchestrator | Sunday 06 July 2025 20:35:04 +0000 (0:00:01.632) 0:00:15.326 *********** 2025-07-06 20:35:05.049957 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:35:05.049968 | orchestrator | 2025-07-06 20:35:05.049979 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-06 20:35:05.049990 | orchestrator | Sunday 06 July 2025 20:35:04 +0000 (0:00:00.256) 
0:00:15.583 *********** 2025-07-06 20:35:05.050001 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:35:05.050148 | orchestrator | 2025-07-06 20:35:05.050171 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:35:07.520894 | orchestrator | Sunday 06 July 2025 20:35:04 +0000 (0:00:00.253) 0:00:15.837 *********** 2025-07-06 20:35:07.521014 | orchestrator | 2025-07-06 20:35:07.521039 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:35:07.521057 | orchestrator | Sunday 06 July 2025 20:35:04 +0000 (0:00:00.068) 0:00:15.905 *********** 2025-07-06 20:35:07.521093 | orchestrator | 2025-07-06 20:35:07.521114 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:35:07.521135 | orchestrator | Sunday 06 July 2025 20:35:04 +0000 (0:00:00.072) 0:00:15.977 *********** 2025-07-06 20:35:07.521155 | orchestrator | 2025-07-06 20:35:07.521173 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-07-06 20:35:07.521193 | orchestrator | Sunday 06 July 2025 20:35:05 +0000 (0:00:00.073) 0:00:16.051 *********** 2025-07-06 20:35:07.521213 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:35:07.521232 | orchestrator | 2025-07-06 20:35:07.521252 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-06 20:35:07.521274 | orchestrator | Sunday 06 July 2025 20:35:06 +0000 (0:00:01.603) 0:00:17.654 *********** 2025-07-06 20:35:07.521318 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-07-06 20:35:07.521334 | orchestrator |  "msg": [ 2025-07-06 20:35:07.521354 | orchestrator |  "Validator run completed.", 2025-07-06 20:35:07.521367 | orchestrator |  "You can find the report file here:", 2025-07-06 20:35:07.521378 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-07-06T20:34:50+00:00-report.json", 2025-07-06 20:35:07.521397 | orchestrator |  "on the following host:", 2025-07-06 20:35:07.521408 | orchestrator |  "testbed-manager" 2025-07-06 20:35:07.521419 | orchestrator |  ] 2025-07-06 20:35:07.521431 | orchestrator | } 2025-07-06 20:35:07.521444 | orchestrator | 2025-07-06 20:35:07.521519 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:35:07.521535 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-07-06 20:35:07.521549 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:35:07.521562 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:35:07.521575 | orchestrator | 2025-07-06 20:35:07.521588 | orchestrator | 2025-07-06 20:35:07.521601 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:35:07.521612 | orchestrator | Sunday 06 July 2025 20:35:07 +0000 (0:00:00.562) 0:00:18.217 *********** 2025-07-06 20:35:07.521623 | orchestrator | =============================================================================== 2025-07-06 20:35:07.521633 | orchestrator | Aggregate test results step one ----------------------------------------- 1.63s 2025-07-06 20:35:07.521644 | orchestrator | Write report file 
------------------------------------------------------- 1.60s 2025-07-06 20:35:07.521655 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.58s 2025-07-06 20:35:07.521689 | orchestrator | Gather status data ------------------------------------------------------ 1.36s 2025-07-06 20:35:07.521700 | orchestrator | Get container info ------------------------------------------------------ 0.97s 2025-07-06 20:35:07.521710 | orchestrator | Create report output directory ------------------------------------------ 0.91s 2025-07-06 20:35:07.521721 | orchestrator | Aggregate test results step one ----------------------------------------- 0.73s 2025-07-06 20:35:07.521732 | orchestrator | Get timestamp for report file ------------------------------------------- 0.72s 2025-07-06 20:35:07.521743 | orchestrator | Print report file information ------------------------------------------- 0.56s 2025-07-06 20:35:07.521753 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s 2025-07-06 20:35:07.521764 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.35s 2025-07-06 20:35:07.521775 | orchestrator | Aggregate test results step two ----------------------------------------- 0.33s 2025-07-06 20:35:07.521785 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.33s 2025-07-06 20:35:07.521796 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s 2025-07-06 20:35:07.521806 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s 2025-07-06 20:35:07.521817 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.31s 2025-07-06 20:35:07.521828 | orchestrator | Set quorum test data ---------------------------------------------------- 0.30s 2025-07-06 20:35:07.521839 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s 2025-07-06 20:35:07.521849 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.29s 2025-07-06 20:35:07.521860 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s 2025-07-06 20:35:07.844888 | orchestrator | + osism validate ceph-mgrs 2025-07-06 20:35:09.573364 | orchestrator | Registering Redlock._acquired_script 2025-07-06 20:35:09.573508 | orchestrator | Registering Redlock._extend_script 2025-07-06 20:35:09.573523 | orchestrator | Registering Redlock._release_script 2025-07-06 20:35:28.900843 | orchestrator | 2025-07-06 20:35:28.901066 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-07-06 20:35:28.901105 | orchestrator | 2025-07-06 20:35:28.901126 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-07-06 20:35:28.901145 | orchestrator | Sunday 06 July 2025 20:35:13 +0000 (0:00:00.464) 0:00:00.464 *********** 2025-07-06 20:35:28.901165 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:35:28.901183 | orchestrator | 2025-07-06 20:35:28.901201 | orchestrator | TASK [Create report output directory] ****************************************** 2025-07-06 20:35:28.901218 | orchestrator | Sunday 06 July 2025 20:35:14 +0000 (0:00:00.624) 0:00:01.089 *********** 2025-07-06 20:35:28.901237 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 
20:35:28.901255 | orchestrator | 2025-07-06 20:35:28.901272 | orchestrator | TASK [Define report vars] ****************************************************** 2025-07-06 20:35:28.901290 | orchestrator | Sunday 06 July 2025 20:35:15 +0000 (0:00:00.861) 0:00:01.951 *********** 2025-07-06 20:35:28.901308 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:28.901327 | orchestrator | 2025-07-06 20:35:28.901345 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-07-06 20:35:28.901363 | orchestrator | Sunday 06 July 2025 20:35:15 +0000 (0:00:00.248) 0:00:02.199 *********** 2025-07-06 20:35:28.901433 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:28.901458 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:35:28.901527 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:35:28.901546 | orchestrator | 2025-07-06 20:35:28.901565 | orchestrator | TASK [Get container info] ****************************************************** 2025-07-06 20:35:28.901584 | orchestrator | Sunday 06 July 2025 20:35:15 +0000 (0:00:00.295) 0:00:02.495 *********** 2025-07-06 20:35:28.901603 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:28.901622 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:35:28.901640 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:35:28.901693 | orchestrator | 2025-07-06 20:35:28.901714 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-07-06 20:35:28.901733 | orchestrator | Sunday 06 July 2025 20:35:17 +0000 (0:00:01.023) 0:00:03.518 *********** 2025-07-06 20:35:28.901751 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:28.901791 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:35:28.901810 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:35:28.901828 | orchestrator | 2025-07-06 20:35:28.901846 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-07-06 20:35:28.901865 | orchestrator | Sunday 06 July 2025 20:35:17 +0000 (0:00:00.330) 0:00:03.849 *********** 2025-07-06 20:35:28.901883 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:28.901900 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:35:28.901918 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:35:28.901935 | orchestrator | 2025-07-06 20:35:28.901953 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-06 20:35:28.901970 | orchestrator | Sunday 06 July 2025 20:35:17 +0000 (0:00:00.531) 0:00:04.381 *********** 2025-07-06 20:35:28.901987 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:28.902003 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:35:28.902075 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:35:28.902096 | orchestrator | 2025-07-06 20:35:28.902112 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-07-06 20:35:28.902128 | orchestrator | Sunday 06 July 2025 20:35:18 +0000 (0:00:00.357) 0:00:04.738 *********** 2025-07-06 20:35:28.902144 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:28.902160 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:35:28.902175 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:35:28.902191 | orchestrator | 2025-07-06 20:35:28.902207 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-07-06 20:35:28.902224 | orchestrator | Sunday 06 July 2025 20:35:18 +0000 (0:00:00.307) 0:00:05.046 *********** 
2025-07-06 20:35:28.902240 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:28.902257 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:35:28.902273 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:35:28.902288 | orchestrator | 2025-07-06 20:35:28.902304 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-06 20:35:28.902319 | orchestrator | Sunday 06 July 2025 20:35:18 +0000 (0:00:00.319) 0:00:05.366 *********** 2025-07-06 20:35:28.902335 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:28.902351 | orchestrator | 2025-07-06 20:35:28.902366 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-06 20:35:28.902382 | orchestrator | Sunday 06 July 2025 20:35:19 +0000 (0:00:00.789) 0:00:06.156 *********** 2025-07-06 20:35:28.902398 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:28.902414 | orchestrator | 2025-07-06 20:35:28.902431 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-06 20:35:28.902447 | orchestrator | Sunday 06 July 2025 20:35:19 +0000 (0:00:00.260) 0:00:06.417 *********** 2025-07-06 20:35:28.902485 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:28.902503 | orchestrator | 2025-07-06 20:35:28.902519 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:35:28.902535 | orchestrator | Sunday 06 July 2025 20:35:20 +0000 (0:00:00.249) 0:00:06.667 *********** 2025-07-06 20:35:28.902550 | orchestrator | 2025-07-06 20:35:28.902566 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:35:28.902583 | orchestrator | Sunday 06 July 2025 20:35:20 +0000 (0:00:00.080) 0:00:06.748 *********** 2025-07-06 20:35:28.902599 | orchestrator | 2025-07-06 20:35:28.902615 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:35:28.902632 | orchestrator | Sunday 06 July 2025 20:35:20 +0000 (0:00:00.085) 0:00:06.833 *********** 2025-07-06 20:35:28.902648 | orchestrator | 2025-07-06 20:35:28.902663 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-06 20:35:28.902676 | orchestrator | Sunday 06 July 2025 20:35:20 +0000 (0:00:00.073) 0:00:06.906 *********** 2025-07-06 20:35:28.902698 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:28.902708 | orchestrator | 2025-07-06 20:35:28.902718 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-07-06 20:35:28.902732 | orchestrator | Sunday 06 July 2025 20:35:20 +0000 (0:00:00.246) 0:00:07.153 *********** 2025-07-06 20:35:28.902747 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:28.902763 | orchestrator | 2025-07-06 20:35:28.902808 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-07-06 20:35:28.902826 | orchestrator | Sunday 06 July 2025 20:35:20 +0000 (0:00:00.229) 0:00:07.383 *********** 2025-07-06 20:35:28.902841 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:28.902851 | orchestrator | 2025-07-06 20:35:28.902861 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-07-06 20:35:28.902870 | orchestrator | Sunday 06 July 2025 20:35:20 +0000 (0:00:00.110) 0:00:07.493 *********** 2025-07-06 20:35:28.902880 | orchestrator | changed: [testbed-node-0] 
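
[editorial note] The mgr-module tasks that follow parse the JSON gathered here and compare the enabled modules against an expected set. A minimal sketch of an equivalent check is shown below; it assumes the ceph CLI and jq are available, that the JSON layout of `ceph mgr module ls` on this release exposes an `enabled_modules` list, and it uses an illustrative required-module list rather than the one actually configured in the testbed.

# Sketch only: extract the enabled mgr modules and fail if a required one is
# missing. The "prometheus dashboard" list is an example, not the job's config.
enabled=$(ceph mgr module ls --format json | jq -r '.enabled_modules[]')
for required in prometheus dashboard; do
    if ! printf '%s\n' "$enabled" | grep -qx "$required"; then
        echo "FAILED: mgr module '$required' is not enabled" >&2
        exit 1
    fi
done
echo "all required mgr modules are enabled"
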
2025-07-06 20:35:28.902889 | orchestrator | 2025-07-06 20:35:28.902898 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-07-06 20:35:28.902907 | orchestrator | Sunday 06 July 2025 20:35:22 +0000 (0:00:01.917) 0:00:09.410 *********** 2025-07-06 20:35:28.902917 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:28.902926 | orchestrator | 2025-07-06 20:35:28.902936 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-07-06 20:35:28.902945 | orchestrator | Sunday 06 July 2025 20:35:23 +0000 (0:00:00.261) 0:00:09.672 *********** 2025-07-06 20:35:28.902954 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:28.902963 | orchestrator | 2025-07-06 20:35:28.902973 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-07-06 20:35:28.902982 | orchestrator | Sunday 06 July 2025 20:35:23 +0000 (0:00:00.715) 0:00:10.387 *********** 2025-07-06 20:35:28.902991 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:28.903001 | orchestrator | 2025-07-06 20:35:28.903010 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-07-06 20:35:28.903019 | orchestrator | Sunday 06 July 2025 20:35:24 +0000 (0:00:00.130) 0:00:10.518 *********** 2025-07-06 20:35:28.903029 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:35:28.903038 | orchestrator | 2025-07-06 20:35:28.903048 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-07-06 20:35:28.903057 | orchestrator | Sunday 06 July 2025 20:35:24 +0000 (0:00:00.153) 0:00:10.672 *********** 2025-07-06 20:35:28.903066 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:35:28.903083 | orchestrator | 2025-07-06 20:35:28.903098 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-07-06 20:35:28.903113 | orchestrator | Sunday 06 July 2025 20:35:24 +0000 (0:00:00.280) 0:00:10.952 *********** 2025-07-06 20:35:28.903127 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:35:28.903142 | orchestrator | 2025-07-06 20:35:28.903157 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-06 20:35:28.903171 | orchestrator | Sunday 06 July 2025 20:35:24 +0000 (0:00:00.244) 0:00:11.196 *********** 2025-07-06 20:35:28.903186 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:35:28.903200 | orchestrator | 2025-07-06 20:35:28.903215 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-06 20:35:28.903230 | orchestrator | Sunday 06 July 2025 20:35:25 +0000 (0:00:01.226) 0:00:12.423 *********** 2025-07-06 20:35:28.903246 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:35:28.903261 | orchestrator | 2025-07-06 20:35:28.903276 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-06 20:35:28.903292 | orchestrator | Sunday 06 July 2025 20:35:26 +0000 (0:00:00.263) 0:00:12.687 *********** 2025-07-06 20:35:28.903307 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:35:28.903322 | orchestrator | 2025-07-06 20:35:28.903338 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:35:28.903429 | orchestrator | Sunday 06 July 2025 20:35:26 
+0000 (0:00:00.266) 0:00:12.954 *********** 2025-07-06 20:35:28.903450 | orchestrator | 2025-07-06 20:35:28.903492 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:35:28.903509 | orchestrator | Sunday 06 July 2025 20:35:26 +0000 (0:00:00.076) 0:00:13.031 *********** 2025-07-06 20:35:28.903525 | orchestrator | 2025-07-06 20:35:28.903540 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:35:28.903555 | orchestrator | Sunday 06 July 2025 20:35:26 +0000 (0:00:00.082) 0:00:13.113 *********** 2025-07-06 20:35:28.903572 | orchestrator | 2025-07-06 20:35:28.903587 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-07-06 20:35:28.903603 | orchestrator | Sunday 06 July 2025 20:35:26 +0000 (0:00:00.081) 0:00:13.195 *********** 2025-07-06 20:35:28.903618 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:35:28.903633 | orchestrator | 2025-07-06 20:35:28.903649 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-06 20:35:28.903664 | orchestrator | Sunday 06 July 2025 20:35:28 +0000 (0:00:01.788) 0:00:14.983 *********** 2025-07-06 20:35:28.903681 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-07-06 20:35:28.903697 | orchestrator |  "msg": [ 2025-07-06 20:35:28.903713 | orchestrator |  "Validator run completed.", 2025-07-06 20:35:28.903730 | orchestrator |  "You can find the report file here:", 2025-07-06 20:35:28.903746 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-07-06T20:35:14+00:00-report.json", 2025-07-06 20:35:28.903764 | orchestrator |  "on the following host:", 2025-07-06 20:35:28.903781 | orchestrator |  "testbed-manager" 2025-07-06 20:35:28.903797 | orchestrator |  ] 2025-07-06 20:35:28.903815 | orchestrator | } 2025-07-06 20:35:28.903831 | orchestrator | 2025-07-06 20:35:28.903847 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:35:28.903866 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-07-06 20:35:28.903883 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:35:28.903917 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:35:29.205848 | orchestrator | 2025-07-06 20:35:29.205950 | orchestrator | 2025-07-06 20:35:29.205964 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:35:29.205978 | orchestrator | Sunday 06 July 2025 20:35:28 +0000 (0:00:00.408) 0:00:15.391 *********** 2025-07-06 20:35:29.205989 | orchestrator | =============================================================================== 2025-07-06 20:35:29.206058 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.92s 2025-07-06 20:35:29.206073 | orchestrator | Write report file ------------------------------------------------------- 1.79s 2025-07-06 20:35:29.206084 | orchestrator | Aggregate test results step one ----------------------------------------- 1.23s 2025-07-06 20:35:29.206095 | orchestrator | Get container info ------------------------------------------------------ 1.02s 2025-07-06 20:35:29.206106 | orchestrator | Create report output directory 
------------------------------------------ 0.86s 2025-07-06 20:35:29.206117 | orchestrator | Aggregate test results step one ----------------------------------------- 0.79s 2025-07-06 20:35:29.206128 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.72s 2025-07-06 20:35:29.206138 | orchestrator | Get timestamp for report file ------------------------------------------- 0.63s 2025-07-06 20:35:29.206149 | orchestrator | Set test result to passed if container is existing ---------------------- 0.53s 2025-07-06 20:35:29.206160 | orchestrator | Print report file information ------------------------------------------- 0.41s 2025-07-06 20:35:29.206192 | orchestrator | Prepare test data ------------------------------------------------------- 0.36s 2025-07-06 20:35:29.206203 | orchestrator | Set test result to failed if container is missing ----------------------- 0.33s 2025-07-06 20:35:29.206213 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.32s 2025-07-06 20:35:29.206224 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s 2025-07-06 20:35:29.206235 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2025-07-06 20:35:29.206250 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.28s 2025-07-06 20:35:29.206261 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s 2025-07-06 20:35:29.206272 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s 2025-07-06 20:35:29.206283 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.26s 2025-07-06 20:35:29.206293 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s 2025-07-06 20:35:29.560815 | orchestrator | + osism validate ceph-osds 2025-07-06 20:35:31.301962 | orchestrator | Registering Redlock._acquired_script 2025-07-06 20:35:31.302120 | orchestrator | Registering Redlock._extend_script 2025-07-06 20:35:31.302144 | orchestrator | Registering Redlock._release_script 2025-07-06 20:35:40.266692 | orchestrator | 2025-07-06 20:35:40.266772 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-07-06 20:35:40.266778 | orchestrator | 2025-07-06 20:35:40.266783 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-07-06 20:35:40.266788 | orchestrator | Sunday 06 July 2025 20:35:35 +0000 (0:00:00.428) 0:00:00.428 *********** 2025-07-06 20:35:40.266792 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 20:35:40.266796 | orchestrator | 2025-07-06 20:35:40.266800 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-06 20:35:40.266804 | orchestrator | Sunday 06 July 2025 20:35:36 +0000 (0:00:00.643) 0:00:01.072 *********** 2025-07-06 20:35:40.266808 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 20:35:40.266812 | orchestrator | 2025-07-06 20:35:40.266816 | orchestrator | TASK [Create report output directory] ****************************************** 2025-07-06 20:35:40.266819 | orchestrator | Sunday 06 July 2025 20:35:36 +0000 (0:00:00.407) 0:00:01.479 *********** 2025-07-06 20:35:40.266823 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 
20:35:40.266827 | orchestrator | 2025-07-06 20:35:40.266831 | orchestrator | TASK [Define report vars] ****************************************************** 2025-07-06 20:35:40.266834 | orchestrator | Sunday 06 July 2025 20:35:37 +0000 (0:00:00.958) 0:00:02.437 *********** 2025-07-06 20:35:40.266838 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:35:40.266843 | orchestrator | 2025-07-06 20:35:40.266847 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-07-06 20:35:40.266851 | orchestrator | Sunday 06 July 2025 20:35:37 +0000 (0:00:00.123) 0:00:02.560 *********** 2025-07-06 20:35:40.266855 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:35:40.266858 | orchestrator | 2025-07-06 20:35:40.266862 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-07-06 20:35:40.266866 | orchestrator | Sunday 06 July 2025 20:35:38 +0000 (0:00:00.144) 0:00:02.704 *********** 2025-07-06 20:35:40.266870 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:35:40.266873 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:35:40.266877 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:35:40.266881 | orchestrator | 2025-07-06 20:35:40.266885 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-07-06 20:35:40.266888 | orchestrator | Sunday 06 July 2025 20:35:38 +0000 (0:00:00.328) 0:00:03.033 *********** 2025-07-06 20:35:40.266892 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:35:40.266896 | orchestrator | 2025-07-06 20:35:40.266900 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-07-06 20:35:40.266919 | orchestrator | Sunday 06 July 2025 20:35:38 +0000 (0:00:00.148) 0:00:03.181 *********** 2025-07-06 20:35:40.266923 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:35:40.266927 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:35:40.266931 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:35:40.266934 | orchestrator | 2025-07-06 20:35:40.266938 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-07-06 20:35:40.266942 | orchestrator | Sunday 06 July 2025 20:35:38 +0000 (0:00:00.320) 0:00:03.502 *********** 2025-07-06 20:35:40.266945 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:35:40.266949 | orchestrator | 2025-07-06 20:35:40.266953 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-06 20:35:40.266957 | orchestrator | Sunday 06 July 2025 20:35:39 +0000 (0:00:00.567) 0:00:04.070 *********** 2025-07-06 20:35:40.266960 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:35:40.266964 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:35:40.266968 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:35:40.266971 | orchestrator | 2025-07-06 20:35:40.266975 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-07-06 20:35:40.266979 | orchestrator | Sunday 06 July 2025 20:35:39 +0000 (0:00:00.566) 0:00:04.636 *********** 2025-07-06 20:35:40.266984 | orchestrator | skipping: [testbed-node-3] => (item={'id': '636a85c98ddc269a4544000d9bb2188eb3ef774e6a52ce718a3c84170b96ffa9', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-06 20:35:40.266990 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'88b537a6c151cb93eab3ff0288baa8e66df201d2f9b344b5e029d773f338cc44', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-06 20:35:40.266994 | orchestrator | skipping: [testbed-node-3] => (item={'id': '48d5bffc85ba9a5cbd170b4548f09f015f94cf72255956ab0e14518adb824896', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-06 20:35:40.267008 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9f36a747153722e41fa2fbb790fb8dd4d2330abff2741a6094f84a32b91a44c9', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-06 20:35:40.267014 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e4d1f1d60177a4e1e58ca04f6885fe3cd22065e7c00132ec0ee321cca960ddc9', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-07-06 20:35:40.267027 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cbd832e6e90ab3cc7ab668240d71d93c692eda9d1490e6191710cefad6960c8d', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-07-06 20:35:40.267031 | orchestrator | skipping: [testbed-node-3] => (item={'id': '39eb909f729b0dcdeea3defce6a31a14495aad624db7dc73e8da0df6e943d75d', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-06 20:35:40.267041 | orchestrator | skipping: [testbed-node-3] => (item={'id': '16280d1b1b05638f0758e025f5b285acc27d853410596cc8da75b9bf30946e5b', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2025-07-06 20:35:40.267045 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5a34b76d91f0dbaab7d452d976c9b1d3dade79802581ec19a6afebd45482a545', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-07-06 20:35:40.267053 | orchestrator | skipping: [testbed-node-3] => (item={'id': '05ed12e431f3571246c220f5ba48a938b624cf2a6f88accf204e3e0b66835240', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-06 20:35:40.267058 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6a83f4e86ab9ee6b7f92d0d86cf7e2e902fe5adcd9fad83d29396bee45cec507', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-06 20:35:40.267063 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a7a42aae4492492a096dd462789fc98b45fc0dc6017f6f3cefd574128c4ede2c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-07-06 20:35:40.267068 | orchestrator | ok: [testbed-node-3] => (item={'id': '2436d6ac53f0eec117766db0162cc79b58aee5e49f274e898173a556ea1067ea', 'image': 
'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-06 20:35:40.267072 | orchestrator | ok: [testbed-node-3] => (item={'id': '5d889407af3b65982edeab9d9d33799a4287ec106b1c51803ccb9cd08e795c8d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-06 20:35:40.267076 | orchestrator | skipping: [testbed-node-3] => (item={'id': '79e7a09fdf59919056b80282d99a10b073c646817174bb57213c618e790f9001', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-07-06 20:35:40.267080 | orchestrator | skipping: [testbed-node-3] => (item={'id': '42795e327db9f058b82bf3871d9a59110f9b18010b2dcc32a200fed9b2627cc2', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-07-06 20:35:40.267084 | orchestrator | skipping: [testbed-node-3] => (item={'id': '797abfb93f4a77cc92eed8e93dc3129336b666eba07b128de4e21dbb979a7cc8', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-07-06 20:35:40.267088 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3adea462d1502b696a0a19db4e4d703965c6f0944b3b71d9f8f076b0a66c5801', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-07-06 20:35:40.267092 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9ace3d421f631230780156a63cfca00bac43738ea04ac1079f51b4d68b421660', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-06 20:35:40.267096 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'da39de8964c7330b3572b54f8defbbcf7766f52d10e76bd0f75e5c9efef23b3f', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-06 20:35:40.267102 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3c9810a90024a49afbdd5751b9617fea8668bff85c4cbb9c9e2c339f755c52af', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-06 20:35:40.543117 | orchestrator | skipping: [testbed-node-4] => (item={'id': '49a6502dbe477badc626b0037ab4b15faee2f1a78ea6a8e1ec01f95ba593eba7', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-06 20:35:40.543219 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd7fda6f401a3d7d4decf67dc62968726d8b65e2a897a10852c82d20e3b73ac17', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-06 20:35:40.543256 | orchestrator | skipping: [testbed-node-4] => (item={'id': '570044dba8d52a7a83069c6730d60f37c80a4f5ad99861c7fb194c0d4cd2f71f', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-06 20:35:40.543270 | orchestrator 
| skipping: [testbed-node-4] => (item={'id': '5c78676f33853da4de668c20ec37e2aa2c274cd292b94eb559d9dfb5292379b2', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-07-06 20:35:40.543281 | orchestrator | skipping: [testbed-node-4] => (item={'id': '044f28a98ec8b3ca034fb1e2a7cc7753b7d37045bfc73bde78aef2e5081f914a', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-07-06 20:35:40.543293 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3bceec31eaf6c3177cfb50f7600d3678c6160bc1c5eeae687cc90b9168232d4f', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-06 20:35:40.543305 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fc311ec568e8ce86ba411920b74a415c390446f8bdc6d098552ea820ce0d8d53', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2025-07-06 20:35:40.543316 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bb09679933dc0a3ed0f913bf246864a55f1fe04f3afe7f0b5ed4ea8001f5b81a', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-07-06 20:35:40.543327 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fd285c285e3d25add05f0bb0f2cacd74ec23ef15578820d0ca2c951a940be67f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-06 20:35:40.543364 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b50e8678e5167cfc382ea0835728964273417a97715841075d4893a64e2607d4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-06 20:35:40.543377 | orchestrator | skipping: [testbed-node-4] => (item={'id': '02fbdded13dc6a348722fecbf55338d7a3c410a71bf56b109e6a27f432c43f03', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2025-07-06 20:35:40.543391 | orchestrator | ok: [testbed-node-4] => (item={'id': '7347f822d8b25c41daf88332bef5ecdb040aa504e2f14b0f30a6907116947f01', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-06 20:35:40.543408 | orchestrator | ok: [testbed-node-4] => (item={'id': '90c2bb61343cbacd30b880c57bb27808eb701087e0670cc26114a3bcc42e10b6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-06 20:35:40.543420 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ec4a5466cff819db87c70d18f4c8174143307b63592818291839ff2dff279180', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-07-06 20:35:40.543448 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd7879c731debfbd3bfbd037e4a9e7f606612d64e8567b25b35d4bef441a8cea8', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-07-06 20:35:40.543468 | orchestrator | skipping: [testbed-node-4] => (item={'id': '112fd3ecffd4174da505358f60cf0f50b4f687258c323d97df2add2c3cf4d5c8', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-07-06 20:35:40.543541 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9042dd1bd49cdc5d6ce00f3606b5a30ea28fb892e2a578048e8fe1c9b506afa5', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-07-06 20:35:40.543553 | orchestrator | skipping: [testbed-node-4] => (item={'id': '77f36fb9c87e78ed568107a8c37eded92b794c492e887df781c23a8fec85263d', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-06 20:35:40.543565 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9087b5289f114396744c2a3fcad55df36545a519286d8f94f9249ba7c9d66bc9', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-06 20:35:40.543576 | orchestrator | skipping: [testbed-node-5] => (item={'id': '570e2e7af83563a5fa4854cb3857947f0cdd3cabf0116be581c53df2678acdea', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-06 20:35:40.543587 | orchestrator | skipping: [testbed-node-5] => (item={'id': '161a71da053061eb70ac763a6acf1f81b598dcec0d8d82cb86afa5fe618a02b4', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-06 20:35:40.543598 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e1e54f5ed8699581cc9c603817a8e260826c3034100ea3dfd36d1cc6610a2e05', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-06 20:35:40.543609 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'afc5093fe353cf77a139af9d89522e243f625f345b137cd206a35acaa436e52a', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-06 20:35:40.543620 | orchestrator | skipping: [testbed-node-5] => (item={'id': '28d16687c5264e7f78b0eeb99f1f4ca9b39344610067753b45d84fd086c9871e', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-07-06 20:35:40.543631 | orchestrator | skipping: [testbed-node-5] => (item={'id': '047b09f4315c1b1c6f58a56c0b66a9dc7fb625aad688ed220bb36cd3623de24e', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-07-06 20:35:40.543643 | orchestrator | skipping: [testbed-node-5] => (item={'id': '176550c4b7ed7d2fc56224fee8ca030fa9f0fdeda41772f4f38e2bd7c2299844', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': 
'/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-06 20:35:40.543659 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2ed0d94f5204062885ab71353ec31070c703ed1dd67cf2ed01967a767f657eb6', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2025-07-06 20:35:40.543671 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bdd32eaf58c95e03d759f55fb32071ea498c1f4b8a18840063b367e633d0c901', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-07-06 20:35:40.543682 | orchestrator | skipping: [testbed-node-5] => (item={'id': '67f02ac29dd162394e275b3e3f4748cb961416f3aa0690a8d08ea0771ff41ae0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-06 20:35:40.543708 | orchestrator | skipping: [testbed-node-5] => (item={'id': '506d083570e96795514479101ece90ebb63e936efb99e5fd639d264c0e3263f8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-06 20:35:48.638964 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fbcf7e9583b7cd8e38c75e27d2c58a89353138be4b17022064eb11cd3b23b4f5', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2025-07-06 20:35:48.639076 | orchestrator | ok: [testbed-node-5] => (item={'id': 'c2518446c8971f9c1a59b7e51bdb88eb6abc132cd191c11b1978426eb11ed297', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-06 20:35:48.639095 | orchestrator | ok: [testbed-node-5] => (item={'id': '632ba22e8e246f779a5106b57edaab8970a2134dd8d041914bcd1bac8a7ebf1d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-06 20:35:48.639108 | orchestrator | skipping: [testbed-node-5] => (item={'id': '35f84ec649063d0b2cc78995d2e910888fbceb56d11c49d6823e008aa7537501', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-07-06 20:35:48.639121 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5ca3cfcce4b068a8b8cddb5c5c7ce9f5ae9475bfc8a0ad689c0789d9143b4f6d', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-07-06 20:35:48.639134 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cd9d01ec58911703e8abda19e5f0f44c2b99f14b1bb3a867a5fc998eea22f56a', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-07-06 20:35:48.639146 | orchestrator | skipping: [testbed-node-5] => (item={'id': '52aae54f95d146e990f93570c77b7f0a79b404386917f43752afd7230a22a54e', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-07-06 20:35:48.639158 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'a5a2b0fb7c46a4d07b5eae09f8e3b6a1a942ebf7d708bae0f74198e1fbd44fa9', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-06 20:35:48.639169 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c0a1ff0a7b17859925eb38fdde4e1faade3873328ea77cb556d865384c9a5425', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-06 20:35:48.639181 | orchestrator | 2025-07-06 20:35:48.639194 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-07-06 20:35:48.639206 | orchestrator | Sunday 06 July 2025 20:35:40 +0000 (0:00:00.575) 0:00:05.212 *********** 2025-07-06 20:35:48.639217 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:35:48.639229 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:35:48.639240 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:35:48.639251 | orchestrator | 2025-07-06 20:35:48.639262 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-07-06 20:35:48.639273 | orchestrator | Sunday 06 July 2025 20:35:40 +0000 (0:00:00.316) 0:00:05.528 *********** 2025-07-06 20:35:48.639285 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:35:48.639297 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:35:48.639308 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:35:48.639347 | orchestrator | 2025-07-06 20:35:48.639359 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-07-06 20:35:48.639370 | orchestrator | Sunday 06 July 2025 20:35:41 +0000 (0:00:00.464) 0:00:05.993 *********** 2025-07-06 20:35:48.639381 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:35:48.639392 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:35:48.639403 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:35:48.639414 | orchestrator | 2025-07-06 20:35:48.639439 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-06 20:35:48.639451 | orchestrator | Sunday 06 July 2025 20:35:41 +0000 (0:00:00.310) 0:00:06.304 *********** 2025-07-06 20:35:48.639461 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:35:48.639473 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:35:48.639533 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:35:48.639547 | orchestrator | 2025-07-06 20:35:48.639560 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-07-06 20:35:48.639572 | orchestrator | Sunday 06 July 2025 20:35:41 +0000 (0:00:00.280) 0:00:06.584 *********** 2025-07-06 20:35:48.639585 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-07-06 20:35:48.639600 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-07-06 20:35:48.639613 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:35:48.639626 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-07-06 20:35:48.639639 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-07-06 20:35:48.639669 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:35:48.639683 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': 
'2', 'state': 'running'})  2025-07-06 20:35:48.639696 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-07-06 20:35:48.639708 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:35:48.639720 | orchestrator | 2025-07-06 20:35:48.639733 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-07-06 20:35:48.639746 | orchestrator | Sunday 06 July 2025 20:35:42 +0000 (0:00:00.329) 0:00:06.913 *********** 2025-07-06 20:35:48.639758 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:35:48.639770 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:35:48.639782 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:35:48.639794 | orchestrator | 2025-07-06 20:35:48.639807 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-07-06 20:35:48.639819 | orchestrator | Sunday 06 July 2025 20:35:42 +0000 (0:00:00.495) 0:00:07.409 *********** 2025-07-06 20:35:48.639832 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:35:48.639843 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:35:48.639854 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:35:48.639865 | orchestrator | 2025-07-06 20:35:48.639875 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-07-06 20:35:48.639886 | orchestrator | Sunday 06 July 2025 20:35:43 +0000 (0:00:00.293) 0:00:07.702 *********** 2025-07-06 20:35:48.639897 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:35:48.639908 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:35:48.639919 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:35:48.639929 | orchestrator | 2025-07-06 20:35:48.639940 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-07-06 20:35:48.639951 | orchestrator | Sunday 06 July 2025 20:35:43 +0000 (0:00:00.304) 0:00:08.006 *********** 2025-07-06 20:35:48.639962 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:35:48.639973 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:35:48.639983 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:35:48.639995 | orchestrator | 2025-07-06 20:35:48.640006 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-06 20:35:48.640016 | orchestrator | Sunday 06 July 2025 20:35:43 +0000 (0:00:00.286) 0:00:08.293 *********** 2025-07-06 20:35:48.640037 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:35:48.640049 | orchestrator | 2025-07-06 20:35:48.640059 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-06 20:35:48.640070 | orchestrator | Sunday 06 July 2025 20:35:44 +0000 (0:00:00.657) 0:00:08.951 *********** 2025-07-06 20:35:48.640081 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:35:48.640092 | orchestrator | 2025-07-06 20:35:48.640103 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-06 20:35:48.640114 | orchestrator | Sunday 06 July 2025 20:35:44 +0000 (0:00:00.245) 0:00:09.196 *********** 2025-07-06 20:35:48.640124 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:35:48.640135 | orchestrator | 2025-07-06 20:35:48.640146 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:35:48.640157 | orchestrator | Sunday 06 July 2025 20:35:44 +0000 (0:00:00.264) 0:00:09.461 *********** 
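The tasks above confirm that each OSD host reports the expected number of ceph-osd containers and that every one of them is in the running state. A minimal manual spot-check of the same condition on a single node is sketched below; the plain docker CLI approach and the expected count of two OSD containers per node (as seen in the listings above) are assumptions for illustration, not the validator's actual implementation.

# Rough manual equivalent of the container-count check, run on one OSD node
# (e.g. testbed-node-3). Assumes two ceph-osd containers per node, as listed above.
expected=2
running=$(docker ps --filter 'name=ceph-osd' --filter 'status=running' --format '{{.Names}}' | wc -l)
if [ "$running" -eq "$expected" ]; then
  echo "PASSED: ${running}/${expected} ceph-osd containers running"
else
  echo "FAILED: only ${running}/${expected} ceph-osd containers running" >&2
  exit 1
fi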
2025-07-06 20:35:48.640168 | orchestrator | 2025-07-06 20:35:48.640179 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:35:48.640189 | orchestrator | Sunday 06 July 2025 20:35:44 +0000 (0:00:00.085) 0:00:09.547 *********** 2025-07-06 20:35:48.640200 | orchestrator | 2025-07-06 20:35:48.640211 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:35:48.640222 | orchestrator | Sunday 06 July 2025 20:35:44 +0000 (0:00:00.069) 0:00:09.617 *********** 2025-07-06 20:35:48.640232 | orchestrator | 2025-07-06 20:35:48.640243 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-06 20:35:48.640254 | orchestrator | Sunday 06 July 2025 20:35:44 +0000 (0:00:00.066) 0:00:09.684 *********** 2025-07-06 20:35:48.640265 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:35:48.640276 | orchestrator | 2025-07-06 20:35:48.640287 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-07-06 20:35:48.640297 | orchestrator | Sunday 06 July 2025 20:35:45 +0000 (0:00:00.231) 0:00:09.915 *********** 2025-07-06 20:35:48.640308 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:35:48.640319 | orchestrator | 2025-07-06 20:35:48.640330 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-06 20:35:48.640341 | orchestrator | Sunday 06 July 2025 20:35:45 +0000 (0:00:00.258) 0:00:10.174 *********** 2025-07-06 20:35:48.640351 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:35:48.640362 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:35:48.640373 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:35:48.640384 | orchestrator | 2025-07-06 20:35:48.640395 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-07-06 20:35:48.640406 | orchestrator | Sunday 06 July 2025 20:35:45 +0000 (0:00:00.288) 0:00:10.462 *********** 2025-07-06 20:35:48.640416 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:35:48.640427 | orchestrator | 2025-07-06 20:35:48.640438 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-07-06 20:35:48.640449 | orchestrator | Sunday 06 July 2025 20:35:46 +0000 (0:00:00.690) 0:00:11.152 *********** 2025-07-06 20:35:48.640460 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:35:48.640471 | orchestrator | 2025-07-06 20:35:48.640498 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-07-06 20:35:48.640509 | orchestrator | Sunday 06 July 2025 20:35:48 +0000 (0:00:01.594) 0:00:12.746 *********** 2025-07-06 20:35:48.640520 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:35:48.640530 | orchestrator | 2025-07-06 20:35:48.640541 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-07-06 20:35:48.640552 | orchestrator | Sunday 06 July 2025 20:35:48 +0000 (0:00:00.136) 0:00:12.883 *********** 2025-07-06 20:35:48.640563 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:35:48.640574 | orchestrator | 2025-07-06 20:35:48.640585 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-07-06 20:35:48.640596 | orchestrator | Sunday 06 July 2025 20:35:48 +0000 (0:00:00.310) 0:00:13.193 *********** 2025-07-06 20:35:48.640620 | orchestrator | skipping: 
[testbed-node-3] 2025-07-06 20:36:00.802276 | orchestrator | 2025-07-06 20:36:00.802395 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-07-06 20:36:00.802414 | orchestrator | Sunday 06 July 2025 20:35:48 +0000 (0:00:00.123) 0:00:13.317 *********** 2025-07-06 20:36:00.802426 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:36:00.802438 | orchestrator | 2025-07-06 20:36:00.802450 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-06 20:36:00.802461 | orchestrator | Sunday 06 July 2025 20:35:48 +0000 (0:00:00.139) 0:00:13.457 *********** 2025-07-06 20:36:00.802472 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:36:00.802484 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:36:00.802528 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:36:00.802539 | orchestrator | 2025-07-06 20:36:00.802550 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-07-06 20:36:00.802561 | orchestrator | Sunday 06 July 2025 20:35:49 +0000 (0:00:00.319) 0:00:13.776 *********** 2025-07-06 20:36:00.802632 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:36:00.802654 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:36:00.802673 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:36:00.802691 | orchestrator | 2025-07-06 20:36:00.802706 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-07-06 20:36:00.802717 | orchestrator | Sunday 06 July 2025 20:35:51 +0000 (0:00:02.514) 0:00:16.291 *********** 2025-07-06 20:36:00.802729 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:36:00.802747 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:36:00.802765 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:36:00.802783 | orchestrator | 2025-07-06 20:36:00.802895 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-07-06 20:36:00.802911 | orchestrator | Sunday 06 July 2025 20:35:51 +0000 (0:00:00.326) 0:00:16.617 *********** 2025-07-06 20:36:00.802923 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:36:00.802936 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:36:00.802948 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:36:00.802960 | orchestrator | 2025-07-06 20:36:00.802974 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-07-06 20:36:00.802988 | orchestrator | Sunday 06 July 2025 20:35:52 +0000 (0:00:00.465) 0:00:17.083 *********** 2025-07-06 20:36:00.803001 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:36:00.803013 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:36:00.803026 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:36:00.803038 | orchestrator | 2025-07-06 20:36:00.803051 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-07-06 20:36:00.803063 | orchestrator | Sunday 06 July 2025 20:35:52 +0000 (0:00:00.293) 0:00:17.376 *********** 2025-07-06 20:36:00.803075 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:36:00.803088 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:36:00.803100 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:36:00.803112 | orchestrator | 2025-07-06 20:36:00.803125 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-07-06 20:36:00.803137 | orchestrator | Sunday 06 July 2025 20:35:53 +0000 (0:00:00.505) 
0:00:17.882 *********** 2025-07-06 20:36:00.803150 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:36:00.803167 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:36:00.803185 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:36:00.803203 | orchestrator | 2025-07-06 20:36:00.803222 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-07-06 20:36:00.803241 | orchestrator | Sunday 06 July 2025 20:35:53 +0000 (0:00:00.293) 0:00:18.176 *********** 2025-07-06 20:36:00.803254 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:36:00.803265 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:36:00.803276 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:36:00.803287 | orchestrator | 2025-07-06 20:36:00.803298 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-06 20:36:00.803333 | orchestrator | Sunday 06 July 2025 20:35:53 +0000 (0:00:00.292) 0:00:18.468 *********** 2025-07-06 20:36:00.803344 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:36:00.803355 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:36:00.803365 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:36:00.803376 | orchestrator | 2025-07-06 20:36:00.803387 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-07-06 20:36:00.803398 | orchestrator | Sunday 06 July 2025 20:35:54 +0000 (0:00:00.476) 0:00:18.945 *********** 2025-07-06 20:36:00.803409 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:36:00.803419 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:36:00.803430 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:36:00.803440 | orchestrator | 2025-07-06 20:36:00.803454 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-07-06 20:36:00.803473 | orchestrator | Sunday 06 July 2025 20:35:54 +0000 (0:00:00.738) 0:00:19.683 *********** 2025-07-06 20:36:00.803529 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:36:00.803541 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:36:00.803551 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:36:00.803562 | orchestrator | 2025-07-06 20:36:00.803573 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-07-06 20:36:00.803590 | orchestrator | Sunday 06 July 2025 20:35:55 +0000 (0:00:00.308) 0:00:19.992 *********** 2025-07-06 20:36:00.803602 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:36:00.803612 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:36:00.803623 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:36:00.803634 | orchestrator | 2025-07-06 20:36:00.803645 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-07-06 20:36:00.803656 | orchestrator | Sunday 06 July 2025 20:35:55 +0000 (0:00:00.307) 0:00:20.300 *********** 2025-07-06 20:36:00.803667 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:36:00.803677 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:36:00.803689 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:36:00.803700 | orchestrator | 2025-07-06 20:36:00.803711 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-07-06 20:36:00.803722 | orchestrator | Sunday 06 July 2025 20:35:56 +0000 (0:00:00.499) 0:00:20.799 *********** 2025-07-06 20:36:00.803733 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 
20:36:00.803744 | orchestrator | 2025-07-06 20:36:00.803755 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-07-06 20:36:00.803766 | orchestrator | Sunday 06 July 2025 20:35:56 +0000 (0:00:00.247) 0:00:21.046 *********** 2025-07-06 20:36:00.803777 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:36:00.803788 | orchestrator | 2025-07-06 20:36:00.803819 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-06 20:36:00.803832 | orchestrator | Sunday 06 July 2025 20:35:56 +0000 (0:00:00.238) 0:00:21.285 *********** 2025-07-06 20:36:00.803843 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 20:36:00.803853 | orchestrator | 2025-07-06 20:36:00.803864 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-06 20:36:00.803920 | orchestrator | Sunday 06 July 2025 20:35:58 +0000 (0:00:01.598) 0:00:22.884 *********** 2025-07-06 20:36:00.803933 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 20:36:00.803945 | orchestrator | 2025-07-06 20:36:00.803955 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-06 20:36:00.803966 | orchestrator | Sunday 06 July 2025 20:35:58 +0000 (0:00:00.251) 0:00:23.135 *********** 2025-07-06 20:36:00.803977 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 20:36:00.803988 | orchestrator | 2025-07-06 20:36:00.803999 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:36:00.804010 | orchestrator | Sunday 06 July 2025 20:35:58 +0000 (0:00:00.241) 0:00:23.377 *********** 2025-07-06 20:36:00.804021 | orchestrator | 2025-07-06 20:36:00.804032 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:36:00.804053 | orchestrator | Sunday 06 July 2025 20:35:58 +0000 (0:00:00.065) 0:00:23.443 *********** 2025-07-06 20:36:00.804064 | orchestrator | 2025-07-06 20:36:00.804075 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:36:00.804085 | orchestrator | Sunday 06 July 2025 20:35:58 +0000 (0:00:00.066) 0:00:23.509 *********** 2025-07-06 20:36:00.804096 | orchestrator | 2025-07-06 20:36:00.804107 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-07-06 20:36:00.804118 | orchestrator | Sunday 06 July 2025 20:35:58 +0000 (0:00:00.069) 0:00:23.578 *********** 2025-07-06 20:36:00.804129 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 20:36:00.804140 | orchestrator | 2025-07-06 20:36:00.804150 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-06 20:36:00.804161 | orchestrator | Sunday 06 July 2025 20:36:00 +0000 (0:00:01.305) 0:00:24.883 *********** 2025-07-06 20:36:00.804172 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-07-06 20:36:00.804183 | orchestrator |  "msg": [ 2025-07-06 20:36:00.804194 | orchestrator |  "Validator run completed.", 2025-07-06 20:36:00.804205 | orchestrator |  "You can find the report file here:", 2025-07-06 20:36:00.804216 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-07-06T20:35:36+00:00-report.json", 2025-07-06 20:36:00.804228 | orchestrator |  "on the following host:", 
2025-07-06 20:36:00.804239 | orchestrator |  "testbed-manager" 2025-07-06 20:36:00.804250 | orchestrator |  ] 2025-07-06 20:36:00.804261 | orchestrator | } 2025-07-06 20:36:00.804272 | orchestrator | 2025-07-06 20:36:00.804283 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:36:00.804295 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-07-06 20:36:00.804308 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-07-06 20:36:00.804319 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-07-06 20:36:00.804330 | orchestrator | 2025-07-06 20:36:00.804340 | orchestrator | 2025-07-06 20:36:00.804351 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:36:00.804362 | orchestrator | Sunday 06 July 2025 20:36:00 +0000 (0:00:00.571) 0:00:25.455 *********** 2025-07-06 20:36:00.804373 | orchestrator | =============================================================================== 2025-07-06 20:36:00.804384 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.51s 2025-07-06 20:36:00.804394 | orchestrator | Aggregate test results step one ----------------------------------------- 1.60s 2025-07-06 20:36:00.804405 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.59s 2025-07-06 20:36:00.804416 | orchestrator | Write report file ------------------------------------------------------- 1.31s 2025-07-06 20:36:00.804427 | orchestrator | Create report output directory ------------------------------------------ 0.96s 2025-07-06 20:36:00.804442 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.74s 2025-07-06 20:36:00.804454 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.69s 2025-07-06 20:36:00.804464 | orchestrator | Aggregate test results step one ----------------------------------------- 0.66s 2025-07-06 20:36:00.804476 | orchestrator | Get timestamp for report file ------------------------------------------- 0.64s 2025-07-06 20:36:00.804510 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.58s 2025-07-06 20:36:00.804529 | orchestrator | Print report file information ------------------------------------------- 0.57s 2025-07-06 20:36:00.804540 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.57s 2025-07-06 20:36:00.804551 | orchestrator | Prepare test data ------------------------------------------------------- 0.57s 2025-07-06 20:36:00.804570 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.51s 2025-07-06 20:36:00.804581 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.50s 2025-07-06 20:36:00.804592 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.50s 2025-07-06 20:36:00.804612 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s 2025-07-06 20:36:01.055801 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.47s 2025-07-06 20:36:01.055925 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.46s 2025-07-06 
20:36:01.055940 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.41s 2025-07-06 20:36:01.297007 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-07-06 20:36:01.306569 | orchestrator | + set -e 2025-07-06 20:36:01.308695 | orchestrator | + source /opt/manager-vars.sh 2025-07-06 20:36:01.308765 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-06 20:36:01.308787 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-06 20:36:01.308807 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-06 20:36:01.308826 | orchestrator | ++ CEPH_VERSION=reef 2025-07-06 20:36:01.308846 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-06 20:36:01.308868 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-06 20:36:01.308887 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-07-06 20:36:01.308905 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-07-06 20:36:01.308923 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-06 20:36:01.308942 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-06 20:36:01.308959 | orchestrator | ++ export ARA=false 2025-07-06 20:36:01.308977 | orchestrator | ++ ARA=false 2025-07-06 20:36:01.308996 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-06 20:36:01.309015 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-06 20:36:01.309035 | orchestrator | ++ export TEMPEST=false 2025-07-06 20:36:01.309053 | orchestrator | ++ TEMPEST=false 2025-07-06 20:36:01.309071 | orchestrator | ++ export IS_ZUUL=true 2025-07-06 20:36:01.309090 | orchestrator | ++ IS_ZUUL=true 2025-07-06 20:36:01.309109 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.103 2025-07-06 20:36:01.309127 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.103 2025-07-06 20:36:01.309146 | orchestrator | ++ export EXTERNAL_API=false 2025-07-06 20:36:01.309163 | orchestrator | ++ EXTERNAL_API=false 2025-07-06 20:36:01.309181 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-06 20:36:01.309198 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-06 20:36:01.309216 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-06 20:36:01.309234 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-06 20:36:01.309251 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-06 20:36:01.309269 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-06 20:36:01.309288 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-07-06 20:36:01.309307 | orchestrator | + source /etc/os-release 2025-07-06 20:36:01.309326 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-07-06 20:36:01.309345 | orchestrator | ++ NAME=Ubuntu 2025-07-06 20:36:01.309364 | orchestrator | ++ VERSION_ID=24.04 2025-07-06 20:36:01.309382 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-07-06 20:36:01.309399 | orchestrator | ++ VERSION_CODENAME=noble 2025-07-06 20:36:01.309415 | orchestrator | ++ ID=ubuntu 2025-07-06 20:36:01.309432 | orchestrator | ++ ID_LIKE=debian 2025-07-06 20:36:01.309452 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-07-06 20:36:01.309471 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-07-06 20:36:01.309547 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-07-06 20:36:01.309569 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-07-06 20:36:01.309590 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-07-06 20:36:01.309608 | orchestrator | ++ LOGO=ubuntu-logo 2025-07-06 20:36:01.309625 | orchestrator | + [[ ubuntu == 
\u\b\u\n\t\u ]] 2025-07-06 20:36:01.309643 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-07-06 20:36:01.309663 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-07-06 20:36:01.345873 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-07-06 20:36:24.454311 | orchestrator | 2025-07-06 20:36:24.454425 | orchestrator | # Status of Elasticsearch 2025-07-06 20:36:24.454442 | orchestrator | 2025-07-06 20:36:24.454480 | orchestrator | + pushd /opt/configuration/contrib 2025-07-06 20:36:24.454493 | orchestrator | + echo 2025-07-06 20:36:24.454558 | orchestrator | + echo '# Status of Elasticsearch' 2025-07-06 20:36:24.454571 | orchestrator | + echo 2025-07-06 20:36:24.454582 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-07-06 20:36:24.636297 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-07-06 20:36:24.636839 | orchestrator | 2025-07-06 20:36:24.636875 | orchestrator | # Status of MariaDB 2025-07-06 20:36:24.636888 | orchestrator | 2025-07-06 20:36:24.636900 | orchestrator | + echo 2025-07-06 20:36:24.636912 | orchestrator | + echo '# Status of MariaDB' 2025-07-06 20:36:24.636923 | orchestrator | + echo 2025-07-06 20:36:24.636934 | orchestrator | + MARIADB_USER=root_shard_0 2025-07-06 20:36:24.636945 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-07-06 20:36:24.714805 | orchestrator | Reading package lists... 2025-07-06 20:36:25.070608 | orchestrator | Building dependency tree... 2025-07-06 20:36:25.071285 | orchestrator | Reading state information... 2025-07-06 20:36:25.479269 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-07-06 20:36:25.479370 | orchestrator | bc set to manually installed. 2025-07-06 20:36:25.479385 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
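The check_galera_cluster plugin invoked above bases its verdict on the wsrep_cluster_size status variable; a rough manual equivalent, assuming the same root_shard_0 credentials and internal API endpoint shown in this log, is sketched below.

# Query the Galera cluster size directly; expect Value = 3 for the three control nodes.
mysql -h api-int.testbed.osism.xyz -u root_shard_0 -ppassword \
  -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"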
2025-07-06 20:36:26.168101 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-07-06 20:36:26.169022 | orchestrator | 2025-07-06 20:36:26.169055 | orchestrator | # Status of Prometheus 2025-07-06 20:36:26.169067 | orchestrator | 2025-07-06 20:36:26.169078 | orchestrator | + echo 2025-07-06 20:36:26.169089 | orchestrator | + echo '# Status of Prometheus' 2025-07-06 20:36:26.169099 | orchestrator | + echo 2025-07-06 20:36:26.169110 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-07-06 20:36:26.238933 | orchestrator | Unauthorized 2025-07-06 20:36:26.243251 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-07-06 20:36:26.314202 | orchestrator | Unauthorized 2025-07-06 20:36:26.317725 | orchestrator | 2025-07-06 20:36:26.317801 | orchestrator | # Status of RabbitMQ 2025-07-06 20:36:26.317816 | orchestrator | 2025-07-06 20:36:26.317828 | orchestrator | + echo 2025-07-06 20:36:26.317839 | orchestrator | + echo '# Status of RabbitMQ' 2025-07-06 20:36:26.317850 | orchestrator | + echo 2025-07-06 20:36:26.317862 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-07-06 20:36:26.774686 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-07-06 20:36:26.787875 | orchestrator | 2025-07-06 20:36:26.787963 | orchestrator | # Status of Redis 2025-07-06 20:36:26.787985 | orchestrator | 2025-07-06 20:36:26.788004 | orchestrator | + echo 2025-07-06 20:36:26.788024 | orchestrator | + echo '# Status of Redis' 2025-07-06 20:36:26.788045 | orchestrator | + echo 2025-07-06 20:36:26.788066 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-07-06 20:36:26.793942 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002064s;;;0.000000;10.000000 2025-07-06 20:36:26.794319 | orchestrator | + popd 2025-07-06 20:36:26.794349 | orchestrator | 2025-07-06 20:36:26.794363 | orchestrator | # Create backup of MariaDB database 2025-07-06 20:36:26.794376 | orchestrator | 2025-07-06 20:36:26.794395 | orchestrator | + echo 2025-07-06 20:36:26.794414 | orchestrator | + echo '# Create backup of MariaDB database' 2025-07-06 20:36:26.794431 | orchestrator | + echo 2025-07-06 20:36:26.794450 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-07-06 20:36:28.530705 | orchestrator | 2025-07-06 20:36:28 | INFO  | Task 299a0003-eb82-4520-a776-088bf5dfc1ad (mariadb_backup) was prepared for execution. 2025-07-06 20:36:28.530780 | orchestrator | 2025-07-06 20:36:28 | INFO  | It takes a moment until task 299a0003-eb82-4520-a776-088bf5dfc1ad (mariadb_backup) has been started and output is visible here. 
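The osism apply call above requests a full backup by setting the mariadb_backup_type variable; under the same interface, a follow-up incremental backup would be requested as sketched below (an assumption that the backup role also accepts the incremental mode in this environment).

# Incremental follow-up to the full backup requested above (assumption:
# mariadb_backup_type=incremental is supported by the mariadb backup role used here).
osism apply mariadb_backup -e mariadb_backup_type=incremental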
2025-07-06 20:36:32.467339 | orchestrator | 2025-07-06 20:36:32.468003 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:36:32.468484 | orchestrator | 2025-07-06 20:36:32.468889 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:36:32.469493 | orchestrator | Sunday 06 July 2025 20:36:32 +0000 (0:00:00.188) 0:00:00.188 *********** 2025-07-06 20:36:32.656404 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:36:32.804468 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:36:32.805174 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:36:32.805543 | orchestrator | 2025-07-06 20:36:32.806091 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:36:32.806868 | orchestrator | Sunday 06 July 2025 20:36:32 +0000 (0:00:00.339) 0:00:00.527 *********** 2025-07-06 20:36:33.399214 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-07-06 20:36:33.400699 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-07-06 20:36:33.402885 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-07-06 20:36:33.403432 | orchestrator | 2025-07-06 20:36:33.404000 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-07-06 20:36:33.404251 | orchestrator | 2025-07-06 20:36:33.404263 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-07-06 20:36:33.404571 | orchestrator | Sunday 06 July 2025 20:36:33 +0000 (0:00:00.595) 0:00:01.123 *********** 2025-07-06 20:36:33.813182 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-06 20:36:33.813290 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-07-06 20:36:33.813311 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-07-06 20:36:33.815584 | orchestrator | 2025-07-06 20:36:33.817382 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-06 20:36:33.817410 | orchestrator | Sunday 06 July 2025 20:36:33 +0000 (0:00:00.407) 0:00:01.531 *********** 2025-07-06 20:36:34.402538 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:36:34.404645 | orchestrator | 2025-07-06 20:36:34.405348 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-07-06 20:36:34.405994 | orchestrator | Sunday 06 July 2025 20:36:34 +0000 (0:00:00.594) 0:00:02.125 *********** 2025-07-06 20:36:37.513790 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:36:37.514115 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:36:37.516774 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:36:37.516814 | orchestrator | 2025-07-06 20:36:37.517001 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-07-06 20:36:37.518256 | orchestrator | Sunday 06 July 2025 20:36:37 +0000 (0:00:03.107) 0:00:05.232 *********** 2025-07-06 20:37:12.940656 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-07-06 20:37:12.940778 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-07-06 20:37:12.940797 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-06 20:37:12.941389 | orchestrator | 
mariadb_bootstrap_restart 2025-07-06 20:37:13.058785 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:37:13.058874 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:37:13.059288 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:37:13.059777 | orchestrator | 2025-07-06 20:37:13.060309 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-07-06 20:37:13.061083 | orchestrator | skipping: no hosts matched 2025-07-06 20:37:13.061723 | orchestrator | 2025-07-06 20:37:13.062336 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-06 20:37:13.065603 | orchestrator | skipping: no hosts matched 2025-07-06 20:37:13.065677 | orchestrator | 2025-07-06 20:37:13.065687 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-07-06 20:37:13.065694 | orchestrator | skipping: no hosts matched 2025-07-06 20:37:13.065700 | orchestrator | 2025-07-06 20:37:13.065706 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-07-06 20:37:13.065741 | orchestrator | 2025-07-06 20:37:13.066355 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-07-06 20:37:13.066690 | orchestrator | Sunday 06 July 2025 20:37:13 +0000 (0:00:35.549) 0:00:40.782 *********** 2025-07-06 20:37:13.239948 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:37:13.352758 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:37:13.354238 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:37:13.357591 | orchestrator | 2025-07-06 20:37:13.357668 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-07-06 20:37:13.357683 | orchestrator | Sunday 06 July 2025 20:37:13 +0000 (0:00:00.293) 0:00:41.076 *********** 2025-07-06 20:37:13.724271 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:37:13.769263 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:37:13.769863 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:37:13.770887 | orchestrator | 2025-07-06 20:37:13.773593 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:37:13.773645 | orchestrator | 2025-07-06 20:37:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 20:37:13.773659 | orchestrator | 2025-07-06 20:37:13 | INFO  | Please wait and do not abort execution. 
2025-07-06 20:37:13.773673 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:37:13.774014 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-06 20:37:13.774818 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-06 20:37:13.775147 | orchestrator | 2025-07-06 20:37:13.775609 | orchestrator | 2025-07-06 20:37:13.776955 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:37:13.777355 | orchestrator | Sunday 06 July 2025 20:37:13 +0000 (0:00:00.414) 0:00:41.490 *********** 2025-07-06 20:37:13.778372 | orchestrator | =============================================================================== 2025-07-06 20:37:13.779229 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 35.55s 2025-07-06 20:37:13.780620 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.11s 2025-07-06 20:37:13.781107 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2025-07-06 20:37:13.782130 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.59s 2025-07-06 20:37:13.782626 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.41s 2025-07-06 20:37:13.783195 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.41s 2025-07-06 20:37:13.783732 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-07-06 20:37:13.784574 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.29s 2025-07-06 20:37:14.321704 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-07-06 20:37:14.327023 | orchestrator | + set -e 2025-07-06 20:37:14.327079 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-06 20:37:14.327093 | orchestrator | ++ export INTERACTIVE=false 2025-07-06 20:37:14.327105 | orchestrator | ++ INTERACTIVE=false 2025-07-06 20:37:14.327116 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-06 20:37:14.327126 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-06 20:37:14.327137 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-07-06 20:37:14.327854 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-06 20:37:14.330650 | orchestrator | 2025-07-06 20:37:14.330693 | orchestrator | # OpenStack endpoints 2025-07-06 20:37:14.330712 | orchestrator | 2025-07-06 20:37:14.330731 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-07-06 20:37:14.330750 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-07-06 20:37:14.330768 | orchestrator | + export OS_CLOUD=admin 2025-07-06 20:37:14.330780 | orchestrator | + OS_CLOUD=admin 2025-07-06 20:37:14.330818 | orchestrator | + echo 2025-07-06 20:37:14.330830 | orchestrator | + echo '# OpenStack endpoints' 2025-07-06 20:37:14.330841 | orchestrator | + echo 2025-07-06 20:37:14.330852 | orchestrator | + openstack endpoint list 2025-07-06 20:37:17.748344 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-06 20:37:17.748472 | orchestrator | | 
ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-07-06 20:37:17.748494 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-06 20:37:17.748510 | orchestrator | | 07df9f165ef646e4aa12da7da3e6ef73 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-07-06 20:37:17.748527 | orchestrator | | 14389eaa16e44f21a593aba6faf44c45 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-07-06 20:37:17.748615 | orchestrator | | 2569f0b39c2f42c8a8f3394bf0f2cd56 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-07-06 20:37:17.748634 | orchestrator | | 52d1077ee89a4f5fb3cab0e4c99e9c27 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-07-06 20:37:17.748654 | orchestrator | | 59270a1db7aa4da28bba08103cecc20e | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-07-06 20:37:17.748672 | orchestrator | | 5b7526f83a3d4ca492a9d42d15db39a1 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-07-06 20:37:17.748712 | orchestrator | | 69e0df64daef451e8bca36482362779f | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-07-06 20:37:17.748725 | orchestrator | | 6b1e964894394ca08de08f06cc27fc1b | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-07-06 20:37:17.748736 | orchestrator | | 6fe46eaab35747f2b76ccf4a26efe662 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-07-06 20:37:17.748747 | orchestrator | | 805942d3ec71451f841d2e7f654d6b85 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-07-06 20:37:17.748758 | orchestrator | | 84a6bc63d5dc492682b367b4d7119f78 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-07-06 20:37:17.748768 | orchestrator | | 8aac6c04832f407998fd643c3487140e | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-07-06 20:37:17.748779 | orchestrator | | 8bc08d9537b944979d6a83c7f10aaedf | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-07-06 20:37:17.748790 | orchestrator | | 95c06e1940d44fb980e8474047d28f67 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-07-06 20:37:17.748800 | orchestrator | | b87015f8f4d74d479eabfd59e4e99338 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-07-06 20:37:17.748811 | orchestrator | | bca70b97a62d4fb29c7408493220cf4f | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-07-06 20:37:17.748822 | orchestrator | | beb7019f9d124d2f8c031cc5823a5db7 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-07-06 20:37:17.748855 | orchestrator | | bee2c32bd5b34e0aa94973998df84c4d | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-07-06 20:37:17.748867 | orchestrator | | c5dac677405f41a9875685775e5057ad | RegionOne 
| octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-07-06 20:37:17.748881 | orchestrator | | ca4bb9bbbf824b36a65a12b4b9295432 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-07-06 20:37:17.748912 | orchestrator | | e5af974d34f042a986007b873073a89e | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-07-06 20:37:17.748926 | orchestrator | | e708c813fb964ec69b942324954d7aca | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-07-06 20:37:17.748938 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-06 20:37:17.993256 | orchestrator | 2025-07-06 20:37:17.993376 | orchestrator | # Cinder 2025-07-06 20:37:17.993391 | orchestrator | 2025-07-06 20:37:17.993403 | orchestrator | + echo 2025-07-06 20:37:17.994345 | orchestrator | + echo '# Cinder' 2025-07-06 20:37:17.994378 | orchestrator | + echo 2025-07-06 20:37:17.994398 | orchestrator | + openstack volume service list 2025-07-06 20:37:20.611972 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-06 20:37:20.612082 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-07-06 20:37:20.612097 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-06 20:37:20.612110 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-07-06T20:37:14.000000 | 2025-07-06 20:37:20.612121 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-07-06T20:37:18.000000 | 2025-07-06 20:37:20.612132 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-07-06T20:37:18.000000 | 2025-07-06 20:37:20.612143 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-07-06T20:37:17.000000 | 2025-07-06 20:37:20.612154 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-07-06T20:37:17.000000 | 2025-07-06 20:37:20.612164 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-07-06T20:37:19.000000 | 2025-07-06 20:37:20.612175 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-07-06T20:37:10.000000 | 2025-07-06 20:37:20.612204 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-07-06T20:37:10.000000 | 2025-07-06 20:37:20.612216 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-07-06T20:37:11.000000 | 2025-07-06 20:37:20.612228 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-06 20:37:20.860977 | orchestrator | 2025-07-06 20:37:20.861068 | orchestrator | # Neutron 2025-07-06 20:37:20.861083 | orchestrator | 2025-07-06 20:37:20.861096 | orchestrator | + echo 2025-07-06 20:37:20.861108 | orchestrator | + echo '# Neutron' 2025-07-06 20:37:20.861121 | orchestrator | + echo 2025-07-06 20:37:20.861132 | orchestrator | + openstack network agent list 2025-07-06 20:37:23.567920 | orchestrator | 
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-06 20:37:23.568044 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-07-06 20:37:23.568097 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-06 20:37:23.568111 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-07-06 20:37:23.568122 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-07-06 20:37:23.568133 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-07-06 20:37:23.568144 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-07-06 20:37:23.568154 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-07-06 20:37:23.568165 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-07-06 20:37:23.568176 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-06 20:37:23.568186 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-06 20:37:23.568197 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-06 20:37:23.568208 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-06 20:37:23.834858 | orchestrator | + openstack network service provider list 2025-07-06 20:37:26.317680 | orchestrator | +---------------+------+---------+ 2025-07-06 20:37:26.317788 | orchestrator | | Service Type | Name | Default | 2025-07-06 20:37:26.317812 | orchestrator | +---------------+------+---------+ 2025-07-06 20:37:26.317831 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-07-06 20:37:26.317850 | orchestrator | +---------------+------+---------+ 2025-07-06 20:37:26.587921 | orchestrator | 2025-07-06 20:37:26.587995 | orchestrator | # Nova 2025-07-06 20:37:26.588002 | orchestrator | 2025-07-06 20:37:26.588008 | orchestrator | + echo 2025-07-06 20:37:26.588013 | orchestrator | + echo '# Nova' 2025-07-06 20:37:26.588019 | orchestrator | + echo 2025-07-06 20:37:26.588025 | orchestrator | + openstack compute service list 2025-07-06 20:37:30.018196 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-06 20:37:30.018307 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-07-06 20:37:30.018323 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-06 20:37:30.018335 | orchestrator | | e7c733bb-4c42-45c0-988b-ada2a4fb5e80 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-07-06T20:37:26.000000 | 2025-07-06 20:37:30.018346 | orchestrator | | 
8e7295b5-7190-4690-aade-5d45351ef257 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-07-06T20:37:27.000000 | 2025-07-06 20:37:30.018357 | orchestrator | | fff1100b-9951-421c-84f1-03c548cc30cf | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-07-06T20:37:29.000000 | 2025-07-06 20:37:30.018368 | orchestrator | | 20d5c135-fa4d-460a-af4a-010414fc86e6 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-07-06T20:37:28.000000 | 2025-07-06 20:37:30.018379 | orchestrator | | 7dbc6e52-a826-4739-9996-eaea14bcfaf5 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-07-06T20:37:29.000000 | 2025-07-06 20:37:30.018390 | orchestrator | | e6c7406f-2e6e-426e-9b9a-1a0e96802d3c | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-07-06T20:37:20.000000 | 2025-07-06 20:37:30.018429 | orchestrator | | 89019ae0-60a6-4eb5-b41b-00973c607e60 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-07-06T20:37:26.000000 | 2025-07-06 20:37:30.018474 | orchestrator | | c493b4ef-3cbc-4945-8bd9-fd771114a0a5 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-07-06T20:37:26.000000 | 2025-07-06 20:37:30.018497 | orchestrator | | 766b3b38-a34b-4c83-be90-54643a7b8079 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-07-06T20:37:27.000000 | 2025-07-06 20:37:30.018508 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-06 20:37:30.283058 | orchestrator | + openstack hypervisor list 2025-07-06 20:37:35.096425 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-06 20:37:35.096548 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-07-06 20:37:35.096660 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-06 20:37:35.096680 | orchestrator | | 4f1ff33a-7a22-4355-90ca-9a5787a8201b | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-07-06 20:37:35.096699 | orchestrator | | e4e89690-9e82-4be8-9d3b-dbee67d1ea42 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-07-06 20:37:35.096718 | orchestrator | | efaf8afc-02fa-493b-84f1-43a36b6754f5 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-07-06 20:37:35.096737 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-06 20:37:35.388274 | orchestrator | 2025-07-06 20:37:35.388361 | orchestrator | # Run OpenStack test play 2025-07-06 20:37:35.388373 | orchestrator | 2025-07-06 20:37:35.388383 | orchestrator | + echo 2025-07-06 20:37:35.388393 | orchestrator | + echo '# Run OpenStack test play' 2025-07-06 20:37:35.388402 | orchestrator | + echo 2025-07-06 20:37:35.388412 | orchestrator | + osism apply --environment openstack test 2025-07-06 20:37:37.053003 | orchestrator | 2025-07-06 20:37:37 | INFO  | Trying to run play test in environment openstack 2025-07-06 20:37:37.057015 | orchestrator | Registering Redlock._acquired_script 2025-07-06 20:37:37.057074 | orchestrator | Registering Redlock._extend_script 2025-07-06 20:37:37.057086 | orchestrator | Registering Redlock._release_script 2025-07-06 20:37:37.116283 | orchestrator | 2025-07-06 20:37:37 | INFO  | Task e8a03dca-74f8-4929-828f-22c0d7eb0051 (test) was prepared for execution. 
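The control-plane checks traced above can be repeated by hand from the manager node before (or instead of) the test play; a minimal sketch, assuming admin credentials are already loaded in the shell for the openstack CLI:

# Repeat the OpenStack service health checks shown in the trace above.
openstack endpoint list                  # Keystone service catalog
openstack volume service list            # Cinder scheduler/volume/backup state
openstack network agent list             # Neutron/OVN agents
openstack network service provider list  # L3 service provider (ovn)
openstack compute service list           # Nova scheduler/conductor/compute state
openstack hypervisor list                # registered QEMU hypervisors
# Then start the OSISM test play, as the job does:
osism apply --environment openstack test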
2025-07-06 20:37:37.116372 | orchestrator | 2025-07-06 20:37:37 | INFO  | It takes a moment until task e8a03dca-74f8-4929-828f-22c0d7eb0051 (test) has been started and output is visible here. 2025-07-06 20:37:41.008311 | orchestrator | 2025-07-06 20:37:41.010501 | orchestrator | PLAY [Create test project] ***************************************************** 2025-07-06 20:37:41.012674 | orchestrator | 2025-07-06 20:37:41.013308 | orchestrator | TASK [Create test domain] ****************************************************** 2025-07-06 20:37:41.013933 | orchestrator | Sunday 06 July 2025 20:37:40 +0000 (0:00:00.079) 0:00:00.079 *********** 2025-07-06 20:37:44.501544 | orchestrator | changed: [localhost] 2025-07-06 20:37:44.502869 | orchestrator | 2025-07-06 20:37:44.506955 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-07-06 20:37:44.507304 | orchestrator | Sunday 06 July 2025 20:37:44 +0000 (0:00:03.495) 0:00:03.574 *********** 2025-07-06 20:37:48.596048 | orchestrator | changed: [localhost] 2025-07-06 20:37:48.596874 | orchestrator | 2025-07-06 20:37:48.596931 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-07-06 20:37:48.597632 | orchestrator | Sunday 06 July 2025 20:37:48 +0000 (0:00:04.093) 0:00:07.668 *********** 2025-07-06 20:37:54.694465 | orchestrator | changed: [localhost] 2025-07-06 20:37:54.694673 | orchestrator | 2025-07-06 20:37:54.694705 | orchestrator | TASK [Create test project] ***************************************************** 2025-07-06 20:37:54.695733 | orchestrator | Sunday 06 July 2025 20:37:54 +0000 (0:00:06.097) 0:00:13.765 *********** 2025-07-06 20:37:58.744046 | orchestrator | changed: [localhost] 2025-07-06 20:37:58.744315 | orchestrator | 2025-07-06 20:37:58.744379 | orchestrator | TASK [Create test user] ******************************************************** 2025-07-06 20:37:58.746448 | orchestrator | Sunday 06 July 2025 20:37:58 +0000 (0:00:04.048) 0:00:17.814 *********** 2025-07-06 20:38:02.711010 | orchestrator | changed: [localhost] 2025-07-06 20:38:02.711272 | orchestrator | 2025-07-06 20:38:02.711330 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-07-06 20:38:02.711360 | orchestrator | Sunday 06 July 2025 20:38:02 +0000 (0:00:03.970) 0:00:21.784 *********** 2025-07-06 20:38:14.644125 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-07-06 20:38:14.644245 | orchestrator | changed: [localhost] => (item=member) 2025-07-06 20:38:14.644262 | orchestrator | changed: [localhost] => (item=creator) 2025-07-06 20:38:14.644273 | orchestrator | 2025-07-06 20:38:14.644286 | orchestrator | TASK [Create test server group] ************************************************ 2025-07-06 20:38:14.644298 | orchestrator | Sunday 06 July 2025 20:38:14 +0000 (0:00:11.925) 0:00:33.710 *********** 2025-07-06 20:38:18.910012 | orchestrator | changed: [localhost] 2025-07-06 20:38:18.910951 | orchestrator | 2025-07-06 20:38:18.911160 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-07-06 20:38:18.912008 | orchestrator | Sunday 06 July 2025 20:38:18 +0000 (0:00:04.272) 0:00:37.982 *********** 2025-07-06 20:38:23.786961 | orchestrator | changed: [localhost] 2025-07-06 20:38:23.787075 | orchestrator | 2025-07-06 20:38:23.788051 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-07-06 
20:38:23.788492 | orchestrator | Sunday 06 July 2025 20:38:23 +0000 (0:00:04.877) 0:00:42.859 *********** 2025-07-06 20:38:27.825335 | orchestrator | changed: [localhost] 2025-07-06 20:38:27.826085 | orchestrator | 2025-07-06 20:38:27.826700 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-07-06 20:38:27.827663 | orchestrator | Sunday 06 July 2025 20:38:27 +0000 (0:00:04.037) 0:00:46.896 *********** 2025-07-06 20:38:31.770408 | orchestrator | changed: [localhost] 2025-07-06 20:38:31.770523 | orchestrator | 2025-07-06 20:38:31.771735 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-07-06 20:38:31.772821 | orchestrator | Sunday 06 July 2025 20:38:31 +0000 (0:00:03.945) 0:00:50.841 *********** 2025-07-06 20:38:35.945509 | orchestrator | changed: [localhost] 2025-07-06 20:38:35.947056 | orchestrator | 2025-07-06 20:38:35.947138 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-07-06 20:38:35.950414 | orchestrator | Sunday 06 July 2025 20:38:35 +0000 (0:00:04.174) 0:00:55.016 *********** 2025-07-06 20:38:39.713194 | orchestrator | changed: [localhost] 2025-07-06 20:38:39.713548 | orchestrator | 2025-07-06 20:38:39.714989 | orchestrator | TASK [Create test network topology] ******************************************** 2025-07-06 20:38:39.716875 | orchestrator | Sunday 06 July 2025 20:38:39 +0000 (0:00:03.767) 0:00:58.784 *********** 2025-07-06 20:38:55.479427 | orchestrator | changed: [localhost] 2025-07-06 20:38:55.479536 | orchestrator | 2025-07-06 20:38:55.479552 | orchestrator | TASK [Create test instances] *************************************************** 2025-07-06 20:38:55.479576 | orchestrator | Sunday 06 July 2025 20:38:55 +0000 (0:00:15.766) 0:01:14.551 *********** 2025-07-06 20:41:10.880357 | orchestrator | changed: [localhost] => (item=test) 2025-07-06 20:41:10.880478 | orchestrator | changed: [localhost] => (item=test-1) 2025-07-06 20:41:10.880493 | orchestrator | changed: [localhost] => (item=test-2) 2025-07-06 20:41:10.880505 | orchestrator | 2025-07-06 20:41:10.880518 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-07-06 20:41:40.880411 | orchestrator | changed: [localhost] => (item=test-3) 2025-07-06 20:41:40.880532 | orchestrator | 2025-07-06 20:41:40.880550 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-07-06 20:42:04.210255 | orchestrator | changed: [localhost] => (item=test-4) 2025-07-06 20:42:04.210379 | orchestrator | 2025-07-06 20:42:04.210397 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-07-06 20:42:04.210411 | orchestrator | Sunday 06 July 2025 20:42:04 +0000 (0:03:08.727) 0:04:23.278 *********** 2025-07-06 20:42:27.780812 | orchestrator | changed: [localhost] => (item=test) 2025-07-06 20:42:27.781039 | orchestrator | changed: [localhost] => (item=test-1) 2025-07-06 20:42:27.781065 | orchestrator | changed: [localhost] => (item=test-2) 2025-07-06 20:42:27.781936 | orchestrator | changed: [localhost] => (item=test-3) 2025-07-06 20:42:27.784179 | orchestrator | changed: [localhost] => (item=test-4) 2025-07-06 20:42:27.785936 | orchestrator | 2025-07-06 20:42:27.787180 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-07-06 20:42:27.788242 | orchestrator | Sunday 06 July 2025 20:42:27 +0000 
(0:00:23.573) 0:04:46.852 *********** 2025-07-06 20:42:59.831630 | orchestrator | changed: [localhost] => (item=test) 2025-07-06 20:42:59.831751 | orchestrator | changed: [localhost] => (item=test-1) 2025-07-06 20:42:59.831767 | orchestrator | changed: [localhost] => (item=test-2) 2025-07-06 20:42:59.832567 | orchestrator | changed: [localhost] => (item=test-3) 2025-07-06 20:42:59.834526 | orchestrator | changed: [localhost] => (item=test-4) 2025-07-06 20:42:59.835038 | orchestrator | 2025-07-06 20:42:59.835884 | orchestrator | TASK [Create test volume] ****************************************************** 2025-07-06 20:42:59.836754 | orchestrator | Sunday 06 July 2025 20:42:59 +0000 (0:00:32.048) 0:05:18.900 *********** 2025-07-06 20:43:07.124589 | orchestrator | changed: [localhost] 2025-07-06 20:43:07.124717 | orchestrator | 2025-07-06 20:43:07.126549 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-07-06 20:43:07.126946 | orchestrator | Sunday 06 July 2025 20:43:07 +0000 (0:00:07.291) 0:05:26.192 *********** 2025-07-06 20:43:20.757488 | orchestrator | changed: [localhost] 2025-07-06 20:43:20.757608 | orchestrator | 2025-07-06 20:43:20.757625 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-07-06 20:43:20.757638 | orchestrator | Sunday 06 July 2025 20:43:20 +0000 (0:00:13.634) 0:05:39.826 *********** 2025-07-06 20:43:25.842818 | orchestrator | ok: [localhost] 2025-07-06 20:43:25.843404 | orchestrator | 2025-07-06 20:43:25.844049 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-07-06 20:43:25.844568 | orchestrator | Sunday 06 July 2025 20:43:25 +0000 (0:00:05.088) 0:05:44.915 *********** 2025-07-06 20:43:25.892014 | orchestrator | ok: [localhost] => { 2025-07-06 20:43:25.892859 | orchestrator |  "msg": "192.168.112.147" 2025-07-06 20:43:25.893793 | orchestrator | } 2025-07-06 20:43:25.895034 | orchestrator | 2025-07-06 20:43:25.895838 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:43:25.896018 | orchestrator | 2025-07-06 20:43:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-07-06 20:43:25.896385 | orchestrator | 2025-07-06 20:43:25 | INFO  | Please wait and do not abort execution. 
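The recap below lists the per-task runtimes for this play. For orientation, the play corresponds roughly to the following manual CLI sequence; this is an illustrative sketch only (resource names, the Cirros 0.6.2 image and the SCS-1L-1-5 flavor are taken from the task names and the server listings further below; the volume size and the external network name are assumptions):

# Illustrative approximation of the test play, not the play itself.
openstack domain create test
openstack project create --domain test test
openstack user create --domain test --password-prompt test
openstack role add --user test --user-domain test --project test --project-domain test member
openstack security group create ssh
openstack security group rule create --protocol tcp --dst-port 22 ssh
openstack security group create icmp
openstack security group rule create --protocol icmp icmp
openstack keypair create test > test-key.pem
# Network selection is omitted here; the play builds an auto-allocated test topology.
openstack server create --image "Cirros 0.6.2" --flavor SCS-1L-1-5 \
  --key-name test --security-group ssh --security-group icmp --wait test
openstack volume create --size 1 test        # size is an assumption
openstack server add volume test test
openstack floating ip create public          # external network name is an assumption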
2025-07-06 20:43:25.897887 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:43:25.898097 | orchestrator | 2025-07-06 20:43:25.899020 | orchestrator | 2025-07-06 20:43:25.899800 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:43:25.900103 | orchestrator | Sunday 06 July 2025 20:43:25 +0000 (0:00:00.048) 0:05:44.964 *********** 2025-07-06 20:43:25.901556 | orchestrator | =============================================================================== 2025-07-06 20:43:25.902296 | orchestrator | Create test instances ------------------------------------------------- 188.73s 2025-07-06 20:43:25.903002 | orchestrator | Add tag to instances --------------------------------------------------- 32.05s 2025-07-06 20:43:25.903629 | orchestrator | Add metadata to instances ---------------------------------------------- 23.57s 2025-07-06 20:43:25.904346 | orchestrator | Create test network topology ------------------------------------------- 15.77s 2025-07-06 20:43:25.904832 | orchestrator | Attach test volume ----------------------------------------------------- 13.63s 2025-07-06 20:43:25.905721 | orchestrator | Add member roles to user test ------------------------------------------ 11.93s 2025-07-06 20:43:25.906147 | orchestrator | Create test volume ------------------------------------------------------ 7.29s 2025-07-06 20:43:25.906645 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.10s 2025-07-06 20:43:25.907385 | orchestrator | Create floating ip address ---------------------------------------------- 5.09s 2025-07-06 20:43:25.907859 | orchestrator | Create ssh security group ----------------------------------------------- 4.88s 2025-07-06 20:43:25.908572 | orchestrator | Create test server group ------------------------------------------------ 4.27s 2025-07-06 20:43:25.908858 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.17s 2025-07-06 20:43:25.909884 | orchestrator | Create test-admin user -------------------------------------------------- 4.09s 2025-07-06 20:43:25.910121 | orchestrator | Create test project ----------------------------------------------------- 4.05s 2025-07-06 20:43:25.910466 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.04s 2025-07-06 20:43:25.910840 | orchestrator | Create test user -------------------------------------------------------- 3.97s 2025-07-06 20:43:25.911600 | orchestrator | Create icmp security group ---------------------------------------------- 3.95s 2025-07-06 20:43:25.911784 | orchestrator | Create test keypair ----------------------------------------------------- 3.77s 2025-07-06 20:43:25.912435 | orchestrator | Create test domain ------------------------------------------------------ 3.50s 2025-07-06 20:43:25.912754 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s 2025-07-06 20:43:26.368760 | orchestrator | + server_list 2025-07-06 20:43:26.368863 | orchestrator | + openstack --os-cloud test server list 2025-07-06 20:43:30.284597 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-07-06 20:43:30.284731 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-07-06 20:43:30.284758 | orchestrator | 
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-07-06 20:43:30.284779 | orchestrator | | 7db630c1-ad11-4e12-97a1-3d690012fe6e | test-4 | ACTIVE | auto_allocated_network=10.42.0.12, 192.168.112.112 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-07-06 20:43:30.284791 | orchestrator | | c286ef7c-cfed-4e84-9121-5433afef662a | test-3 | ACTIVE | auto_allocated_network=10.42.0.3, 192.168.112.101 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-07-06 20:43:30.284802 | orchestrator | | 6bf02c0c-e53e-4e4c-8b3f-b3bbe0eb2326 | test-2 | ACTIVE | auto_allocated_network=10.42.0.56, 192.168.112.167 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-07-06 20:43:30.284813 | orchestrator | | 3de9f417-ac8c-4f3a-879a-194c5206872a | test-1 | ACTIVE | auto_allocated_network=10.42.0.49, 192.168.112.143 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-07-06 20:43:30.284824 | orchestrator | | 7090322d-8d83-406a-9258-1a2b1ce55b00 | test | ACTIVE | auto_allocated_network=10.42.0.41, 192.168.112.147 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-07-06 20:43:30.284835 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-07-06 20:43:30.599508 | orchestrator | + openstack --os-cloud test server show test 2025-07-06 20:43:34.042263 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-06 20:43:34.042356 | orchestrator | | Field | Value | 2025-07-06 20:43:34.042368 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-06 20:43:34.042441 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-07-06 20:43:34.042451 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-07-06 20:43:34.042465 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-07-06 20:43:34.042474 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-07-06 20:43:34.042482 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-07-06 20:43:34.042490 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-07-06 20:43:34.042498 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-07-06 20:43:34.042506 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-07-06 20:43:34.042529 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-07-06 20:43:34.042537 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-07-06 20:43:34.042545 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-07-06 20:43:34.042560 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-07-06 20:43:34.042568 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-07-06 20:43:34.042576 | orchestrator | | OS-EXT-STS:task_state | None | 2025-07-06 20:43:34.042588 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-07-06 20:43:34.042596 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-06T20:39:25.000000 | 2025-07-06 20:43:34.042604 | orchestrator | | 
OS-SRV-USG:terminated_at | None | 2025-07-06 20:43:34.042612 | orchestrator | | accessIPv4 | | 2025-07-06 20:43:34.042620 | orchestrator | | accessIPv6 | | 2025-07-06 20:43:34.042628 | orchestrator | | addresses | auto_allocated_network=10.42.0.41, 192.168.112.147 | 2025-07-06 20:43:34.042641 | orchestrator | | config_drive | | 2025-07-06 20:43:34.042658 | orchestrator | | created | 2025-07-06T20:39:03Z | 2025-07-06 20:43:34.042666 | orchestrator | | description | None | 2025-07-06 20:43:34.042674 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-07-06 20:43:34.042682 | orchestrator | | hostId | 3fa36a95f6793944efc671b455f82ffaf4d3b8e593224cde32d814de | 2025-07-06 20:43:34.042693 | orchestrator | | host_status | None | 2025-07-06 20:43:34.042701 | orchestrator | | id | 7090322d-8d83-406a-9258-1a2b1ce55b00 | 2025-07-06 20:43:34.042709 | orchestrator | | image | Cirros 0.6.2 (1768479b-f36d-4dba-b4c2-71438ee9ba83) | 2025-07-06 20:43:34.042717 | orchestrator | | key_name | test | 2025-07-06 20:43:34.042725 | orchestrator | | locked | False | 2025-07-06 20:43:34.042733 | orchestrator | | locked_reason | None | 2025-07-06 20:43:34.042741 | orchestrator | | name | test | 2025-07-06 20:43:34.042758 | orchestrator | | pinned_availability_zone | None | 2025-07-06 20:43:34.042766 | orchestrator | | progress | 0 | 2025-07-06 20:43:34.042774 | orchestrator | | project_id | 23cd8784addc4594b20e4bb44b5c6cb4 | 2025-07-06 20:43:34.042782 | orchestrator | | properties | hostname='test' | 2025-07-06 20:43:34.042792 | orchestrator | | security_groups | name='icmp' | 2025-07-06 20:43:34.042802 | orchestrator | | | name='ssh' | 2025-07-06 20:43:34.042813 | orchestrator | | server_groups | None | 2025-07-06 20:43:34.042822 | orchestrator | | status | ACTIVE | 2025-07-06 20:43:34.042831 | orchestrator | | tags | test | 2025-07-06 20:43:34.042841 | orchestrator | | trusted_image_certificates | None | 2025-07-06 20:43:34.042851 | orchestrator | | updated | 2025-07-06T20:42:09Z | 2025-07-06 20:43:34.042869 | orchestrator | | user_id | 1f4987897a734d97868233009d91e5df | 2025-07-06 20:43:34.042879 | orchestrator | | volumes_attached | delete_on_termination='False', id='bcbc1b30-9244-4099-8c9e-5b978b7cfb0e' | 2025-07-06 20:43:34.045861 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-06 20:43:34.308194 | orchestrator | + openstack --os-cloud test server show test-1 2025-07-06 20:43:37.606514 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-06 20:43:37.606635 | orchestrator | | Field | Value | 2025-07-06 20:43:37.606652 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-06 20:43:37.606675 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-07-06 20:43:37.606688 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-07-06 20:43:37.606699 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-07-06 20:43:37.606710 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-07-06 20:43:37.606743 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-07-06 20:43:37.606755 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-07-06 20:43:37.606766 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-07-06 20:43:37.606777 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-07-06 20:43:37.606832 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-07-06 20:43:37.606846 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-07-06 20:43:37.606858 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-07-06 20:43:37.606876 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-07-06 20:43:37.606888 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-07-06 20:43:37.606899 | orchestrator | | OS-EXT-STS:task_state | None | 2025-07-06 20:43:37.606911 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-07-06 20:43:37.606930 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-06T20:40:06.000000 | 2025-07-06 20:43:37.606999 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-07-06 20:43:37.607011 | orchestrator | | accessIPv4 | | 2025-07-06 20:43:37.607023 | orchestrator | | accessIPv6 | | 2025-07-06 20:43:37.607035 | orchestrator | | addresses | auto_allocated_network=10.42.0.49, 192.168.112.143 | 2025-07-06 20:43:37.607057 | orchestrator | | config_drive | | 2025-07-06 20:43:37.607070 | orchestrator | | created | 2025-07-06T20:39:46Z | 2025-07-06 20:43:37.607082 | orchestrator | | description | None | 2025-07-06 20:43:37.607100 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-07-06 20:43:37.607113 | orchestrator | | hostId | 12eacd7136e5c0036e6fc0cb78fd44a7f172dcae4e151ca11ea70d22 | 2025-07-06 20:43:37.607126 | orchestrator | | host_status | None | 2025-07-06 20:43:37.607146 | orchestrator | | id | 3de9f417-ac8c-4f3a-879a-194c5206872a | 2025-07-06 20:43:37.607159 | orchestrator | | image | Cirros 0.6.2 (1768479b-f36d-4dba-b4c2-71438ee9ba83) | 2025-07-06 20:43:37.607172 | orchestrator | | key_name | test | 2025-07-06 20:43:37.607185 | orchestrator | | locked | False | 2025-07-06 20:43:37.607197 | orchestrator | | locked_reason | None | 2025-07-06 20:43:37.607209 | orchestrator | | name | test-1 | 2025-07-06 20:43:37.607226 | orchestrator | | pinned_availability_zone | None | 2025-07-06 20:43:37.607238 | orchestrator | | progress | 0 | 2025-07-06 20:43:37.607249 | orchestrator | | project_id | 23cd8784addc4594b20e4bb44b5c6cb4 | 2025-07-06 20:43:37.607264 | orchestrator | | properties | hostname='test-1' | 2025-07-06 20:43:37.607276 | 
orchestrator | | security_groups | name='icmp' | 2025-07-06 20:43:37.607300 | orchestrator | | | name='ssh' | 2025-07-06 20:43:37.607312 | orchestrator | | server_groups | None | 2025-07-06 20:43:37.607323 | orchestrator | | status | ACTIVE | 2025-07-06 20:43:37.607334 | orchestrator | | tags | test | 2025-07-06 20:43:37.607345 | orchestrator | | trusted_image_certificates | None | 2025-07-06 20:43:37.607356 | orchestrator | | updated | 2025-07-06T20:42:13Z | 2025-07-06 20:43:37.607372 | orchestrator | | user_id | 1f4987897a734d97868233009d91e5df | 2025-07-06 20:43:37.607383 | orchestrator | | volumes_attached | | 2025-07-06 20:43:37.610663 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-06 20:43:37.895736 | orchestrator | + openstack --os-cloud test server show test-2 2025-07-06 20:43:40.935566 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-06 20:43:40.935694 | orchestrator | | Field | Value | 2025-07-06 20:43:40.935710 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-06 20:43:40.935722 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-07-06 20:43:40.935733 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-07-06 20:43:40.935744 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-07-06 20:43:40.935755 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-07-06 20:43:40.935766 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-07-06 20:43:40.935777 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-07-06 20:43:40.935788 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-07-06 20:43:40.935799 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-07-06 20:43:40.935826 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-07-06 20:43:40.935845 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-07-06 20:43:40.935857 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-07-06 20:43:40.935868 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-07-06 20:43:40.935879 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-07-06 20:43:40.935889 | orchestrator | | OS-EXT-STS:task_state | None | 2025-07-06 20:43:40.935900 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-07-06 20:43:40.935921 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-06T20:40:47.000000 | 2025-07-06 20:43:40.935932 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-07-06 20:43:40.936023 | orchestrator | | accessIPv4 | | 2025-07-06 20:43:40.936036 | orchestrator | | accessIPv6 | | 2025-07-06 20:43:40.936047 | orchestrator | | addresses | 
auto_allocated_network=10.42.0.56, 192.168.112.167 | 2025-07-06 20:43:40.936075 | orchestrator | | config_drive | | 2025-07-06 20:43:40.936095 | orchestrator | | created | 2025-07-06T20:40:25Z | 2025-07-06 20:43:40.936109 | orchestrator | | description | None | 2025-07-06 20:43:40.936122 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-07-06 20:43:40.936135 | orchestrator | | hostId | 0d3218a02c94093fde71d59e90bd96eb115e233822f635a7688f0810 | 2025-07-06 20:43:40.936148 | orchestrator | | host_status | None | 2025-07-06 20:43:40.936161 | orchestrator | | id | 6bf02c0c-e53e-4e4c-8b3f-b3bbe0eb2326 | 2025-07-06 20:43:40.936173 | orchestrator | | image | Cirros 0.6.2 (1768479b-f36d-4dba-b4c2-71438ee9ba83) | 2025-07-06 20:43:40.936186 | orchestrator | | key_name | test | 2025-07-06 20:43:40.936199 | orchestrator | | locked | False | 2025-07-06 20:43:40.936210 | orchestrator | | locked_reason | None | 2025-07-06 20:43:40.936228 | orchestrator | | name | test-2 | 2025-07-06 20:43:40.936250 | orchestrator | | pinned_availability_zone | None | 2025-07-06 20:43:40.936262 | orchestrator | | progress | 0 | 2025-07-06 20:43:40.936273 | orchestrator | | project_id | 23cd8784addc4594b20e4bb44b5c6cb4 | 2025-07-06 20:43:40.936284 | orchestrator | | properties | hostname='test-2' | 2025-07-06 20:43:40.936295 | orchestrator | | security_groups | name='icmp' | 2025-07-06 20:43:40.936306 | orchestrator | | | name='ssh' | 2025-07-06 20:43:40.936317 | orchestrator | | server_groups | None | 2025-07-06 20:43:40.936328 | orchestrator | | status | ACTIVE | 2025-07-06 20:43:40.936339 | orchestrator | | tags | test | 2025-07-06 20:43:40.936350 | orchestrator | | trusted_image_certificates | None | 2025-07-06 20:43:40.936367 | orchestrator | | updated | 2025-07-06T20:42:18Z | 2025-07-06 20:43:40.936384 | orchestrator | | user_id | 1f4987897a734d97868233009d91e5df | 2025-07-06 20:43:40.936400 | orchestrator | | volumes_attached | | 2025-07-06 20:43:40.940613 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-06 20:43:41.195185 | orchestrator | + openstack --os-cloud test server show test-3 2025-07-06 20:43:44.437494 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-06 20:43:44.437608 | orchestrator | | Field | Value | 2025-07-06 20:43:44.437625 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-06 20:43:44.437637 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-07-06 20:43:44.437648 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-07-06 20:43:44.437659 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-07-06 20:43:44.437694 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-07-06 20:43:44.437705 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-07-06 20:43:44.437717 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-07-06 20:43:44.437742 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-07-06 20:43:44.437754 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-07-06 20:43:44.437783 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-07-06 20:43:44.437795 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-07-06 20:43:44.437806 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-07-06 20:43:44.437817 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-07-06 20:43:44.437828 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-07-06 20:43:44.437839 | orchestrator | | OS-EXT-STS:task_state | None | 2025-07-06 20:43:44.437858 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-07-06 20:43:44.437869 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-06T20:41:19.000000 | 2025-07-06 20:43:44.437880 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-07-06 20:43:44.437892 | orchestrator | | accessIPv4 | | 2025-07-06 20:43:44.437908 | orchestrator | | accessIPv6 | | 2025-07-06 20:43:44.437936 | orchestrator | | addresses | auto_allocated_network=10.42.0.3, 192.168.112.101 | 2025-07-06 20:43:44.438008 | orchestrator | | config_drive | | 2025-07-06 20:43:44.438066 | orchestrator | | created | 2025-07-06T20:41:03Z | 2025-07-06 20:43:44.438079 | orchestrator | | description | None | 2025-07-06 20:43:44.438092 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-07-06 20:43:44.438105 | orchestrator | | hostId | 3fa36a95f6793944efc671b455f82ffaf4d3b8e593224cde32d814de | 2025-07-06 20:43:44.438134 | orchestrator | | host_status | None | 2025-07-06 20:43:44.438147 | orchestrator | | id | c286ef7c-cfed-4e84-9121-5433afef662a | 2025-07-06 20:43:44.438160 | orchestrator | | image | Cirros 0.6.2 (1768479b-f36d-4dba-b4c2-71438ee9ba83) | 2025-07-06 20:43:44.438172 | orchestrator | | key_name | test | 2025-07-06 20:43:44.438185 | orchestrator | | locked | False | 2025-07-06 20:43:44.438198 | orchestrator | | locked_reason | None | 2025-07-06 20:43:44.438212 | orchestrator | | name | test-3 | 2025-07-06 20:43:44.438233 | orchestrator | | pinned_availability_zone | None | 2025-07-06 20:43:44.438246 | orchestrator | | progress | 0 | 2025-07-06 20:43:44.438259 | orchestrator | | project_id | 23cd8784addc4594b20e4bb44b5c6cb4 | 2025-07-06 20:43:44.438279 | orchestrator | | properties | hostname='test-3' | 2025-07-06 20:43:44.438293 | 
orchestrator | | security_groups | name='icmp' | 2025-07-06 20:43:44.438305 | orchestrator | | | name='ssh' | 2025-07-06 20:43:44.438317 | orchestrator | | server_groups | None | 2025-07-06 20:43:44.438329 | orchestrator | | status | ACTIVE | 2025-07-06 20:43:44.438342 | orchestrator | | tags | test | 2025-07-06 20:43:44.438367 | orchestrator | | trusted_image_certificates | None | 2025-07-06 20:43:44.438380 | orchestrator | | updated | 2025-07-06T20:42:23Z | 2025-07-06 20:43:44.438399 | orchestrator | | user_id | 1f4987897a734d97868233009d91e5df | 2025-07-06 20:43:44.438410 | orchestrator | | volumes_attached | | 2025-07-06 20:43:44.442480 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-06 20:43:44.710796 | orchestrator | + openstack --os-cloud test server show test-4 2025-07-06 20:43:47.744683 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-06 20:43:47.744817 | orchestrator | | Field | Value | 2025-07-06 20:43:47.744839 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-06 20:43:47.744857 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-07-06 20:43:47.744874 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-07-06 20:43:47.744891 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-07-06 20:43:47.744907 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-07-06 20:43:47.744943 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-07-06 20:43:47.744987 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-07-06 20:43:47.745005 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-07-06 20:43:47.745022 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-07-06 20:43:47.745082 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-07-06 20:43:47.745102 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-07-06 20:43:47.745118 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-07-06 20:43:47.745134 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-07-06 20:43:47.745151 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-07-06 20:43:47.745167 | orchestrator | | OS-EXT-STS:task_state | None | 2025-07-06 20:43:47.745183 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-07-06 20:43:47.745201 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-06T20:41:52.000000 | 2025-07-06 20:43:47.745224 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-07-06 20:43:47.745241 | orchestrator | | accessIPv4 | | 2025-07-06 20:43:47.745257 | orchestrator | | accessIPv6 | | 2025-07-06 20:43:47.745283 | orchestrator | | addresses | 
auto_allocated_network=10.42.0.12, 192.168.112.112 | 2025-07-06 20:43:47.745307 | orchestrator | | config_drive | | 2025-07-06 20:43:47.745325 | orchestrator | | created | 2025-07-06T20:41:37Z | 2025-07-06 20:43:47.745342 | orchestrator | | description | None | 2025-07-06 20:43:47.745359 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-07-06 20:43:47.745375 | orchestrator | | hostId | 12eacd7136e5c0036e6fc0cb78fd44a7f172dcae4e151ca11ea70d22 | 2025-07-06 20:43:47.745391 | orchestrator | | host_status | None | 2025-07-06 20:43:47.745407 | orchestrator | | id | 7db630c1-ad11-4e12-97a1-3d690012fe6e | 2025-07-06 20:43:47.745429 | orchestrator | | image | Cirros 0.6.2 (1768479b-f36d-4dba-b4c2-71438ee9ba83) | 2025-07-06 20:43:47.745447 | orchestrator | | key_name | test | 2025-07-06 20:43:47.745476 | orchestrator | | locked | False | 2025-07-06 20:43:47.745494 | orchestrator | | locked_reason | None | 2025-07-06 20:43:47.745511 | orchestrator | | name | test-4 | 2025-07-06 20:43:47.745537 | orchestrator | | pinned_availability_zone | None | 2025-07-06 20:43:47.745554 | orchestrator | | progress | 0 | 2025-07-06 20:43:47.745565 | orchestrator | | project_id | 23cd8784addc4594b20e4bb44b5c6cb4 | 2025-07-06 20:43:47.745575 | orchestrator | | properties | hostname='test-4' | 2025-07-06 20:43:47.745585 | orchestrator | | security_groups | name='icmp' | 2025-07-06 20:43:47.745595 | orchestrator | | | name='ssh' | 2025-07-06 20:43:47.745605 | orchestrator | | server_groups | None | 2025-07-06 20:43:47.745620 | orchestrator | | status | ACTIVE | 2025-07-06 20:43:47.745637 | orchestrator | | tags | test | 2025-07-06 20:43:47.745647 | orchestrator | | trusted_image_certificates | None | 2025-07-06 20:43:47.745657 | orchestrator | | updated | 2025-07-06T20:42:27Z | 2025-07-06 20:43:47.745672 | orchestrator | | user_id | 1f4987897a734d97868233009d91e5df | 2025-07-06 20:43:47.745682 | orchestrator | | volumes_attached | | 2025-07-06 20:43:47.749633 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-06 20:43:48.032066 | orchestrator | + server_ping 2025-07-06 20:43:48.033427 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-07-06 20:43:48.033467 | orchestrator | ++ tr -d '\r' 2025-07-06 20:43:50.882808 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:43:50.882914 | orchestrator | + ping -c3 192.168.112.112 2025-07-06 20:43:50.897786 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data. 
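The echo replies that follow come from the job's server_ping step; the loop being traced here is, written out in full, essentially the following (a cleaned-up sketch of the trace above, using the same --os-cloud test credentials):

# server_ping: ping every ACTIVE floating IP of the test project three times.
for address in $(openstack --os-cloud test floating ip list --status ACTIVE \
    -f value -c "Floating IP Address" | tr -d '\r'); do
  ping -c3 "$address"
done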
2025-07-06 20:43:50.897833 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=10.4 ms 2025-07-06 20:43:51.892432 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=3.12 ms 2025-07-06 20:43:52.892384 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.77 ms 2025-07-06 20:43:52.892493 | orchestrator | 2025-07-06 20:43:52.892510 | orchestrator | --- 192.168.112.112 ping statistics --- 2025-07-06 20:43:52.892523 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-07-06 20:43:52.892534 | orchestrator | rtt min/avg/max/mdev = 1.774/5.106/10.422/3.799 ms 2025-07-06 20:43:52.892686 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:43:52.892706 | orchestrator | + ping -c3 192.168.112.143 2025-07-06 20:43:52.906656 | orchestrator | PING 192.168.112.143 (192.168.112.143) 56(84) bytes of data. 2025-07-06 20:43:52.906722 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=1 ttl=63 time=9.57 ms 2025-07-06 20:43:53.902104 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=2 ttl=63 time=2.88 ms 2025-07-06 20:43:54.903787 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=3 ttl=63 time=2.54 ms 2025-07-06 20:43:54.903903 | orchestrator | 2025-07-06 20:43:54.903919 | orchestrator | --- 192.168.112.143 ping statistics --- 2025-07-06 20:43:54.904000 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-06 20:43:54.904013 | orchestrator | rtt min/avg/max/mdev = 2.536/4.995/9.568/3.236 ms 2025-07-06 20:43:54.904404 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:43:54.904431 | orchestrator | + ping -c3 192.168.112.101 2025-07-06 20:43:54.919690 | orchestrator | PING 192.168.112.101 (192.168.112.101) 56(84) bytes of data. 2025-07-06 20:43:54.919781 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=1 ttl=63 time=9.94 ms 2025-07-06 20:43:55.914103 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=2 ttl=63 time=3.03 ms 2025-07-06 20:43:56.914410 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=3 ttl=63 time=2.35 ms 2025-07-06 20:43:56.914542 | orchestrator | 2025-07-06 20:43:56.914559 | orchestrator | --- 192.168.112.101 ping statistics --- 2025-07-06 20:43:56.914572 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-07-06 20:43:56.914583 | orchestrator | rtt min/avg/max/mdev = 2.349/5.105/9.935/3.426 ms 2025-07-06 20:43:56.914775 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:43:56.914804 | orchestrator | + ping -c3 192.168.112.147 2025-07-06 20:43:56.927194 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data. 
2025-07-06 20:43:56.927283 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=7.73 ms 2025-07-06 20:43:57.924330 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=2.71 ms 2025-07-06 20:43:58.925658 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=2.03 ms 2025-07-06 20:43:58.925752 | orchestrator | 2025-07-06 20:43:58.925766 | orchestrator | --- 192.168.112.147 ping statistics --- 2025-07-06 20:43:58.925776 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-06 20:43:58.925785 | orchestrator | rtt min/avg/max/mdev = 2.031/4.156/7.729/2.541 ms 2025-07-06 20:43:58.926288 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:43:58.926310 | orchestrator | + ping -c3 192.168.112.167 2025-07-06 20:43:58.938864 | orchestrator | PING 192.168.112.167 (192.168.112.167) 56(84) bytes of data. 2025-07-06 20:43:58.938928 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=1 ttl=63 time=8.00 ms 2025-07-06 20:43:59.935069 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=2 ttl=63 time=2.76 ms 2025-07-06 20:44:00.936828 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=3 ttl=63 time=2.20 ms 2025-07-06 20:44:00.936936 | orchestrator | 2025-07-06 20:44:00.936950 | orchestrator | --- 192.168.112.167 ping statistics --- 2025-07-06 20:44:00.937000 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-06 20:44:00.937012 | orchestrator | rtt min/avg/max/mdev = 2.197/4.320/8.004/2.614 ms 2025-07-06 20:44:00.937510 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]] 2025-07-06 20:44:01.249351 | orchestrator | ok: Runtime: 0:09:56.457522 2025-07-06 20:44:01.296190 | 2025-07-06 20:44:01.296361 | TASK [Run tempest] 2025-07-06 20:44:01.841376 | orchestrator | skipping: Conditional result was False 2025-07-06 20:44:01.857636 | 2025-07-06 20:44:01.857824 | TASK [Check prometheus alert status] 2025-07-06 20:44:02.394465 | orchestrator | skipping: Conditional result was False 2025-07-06 20:44:02.397582 | 2025-07-06 20:44:02.397761 | PLAY RECAP 2025-07-06 20:44:02.397924 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-07-06 20:44:02.397995 | 2025-07-06 20:44:02.686987 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-07-06 20:44:02.690563 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-07-06 20:44:04.406982 | 2025-07-06 20:44:04.407160 | PLAY [Post output play] 2025-07-06 20:44:04.435692 | 2025-07-06 20:44:04.435851 | LOOP [stage-output : Register sources] 2025-07-06 20:44:04.519742 | 2025-07-06 20:44:04.520060 | TASK [stage-output : Check sudo] 2025-07-06 20:44:05.419758 | orchestrator | sudo: a password is required 2025-07-06 20:44:05.563567 | orchestrator | ok: Runtime: 0:00:00.010589 2025-07-06 20:44:05.579124 | 2025-07-06 20:44:05.579333 | LOOP [stage-output : Set source and destination for files and folders] 2025-07-06 20:44:05.633619 | 2025-07-06 20:44:05.633921 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-07-06 20:44:05.702041 | orchestrator | ok 2025-07-06 20:44:05.709677 | 2025-07-06 20:44:05.709787 | LOOP [stage-output : Ensure target folders exist] 2025-07-06 20:44:06.181627 | orchestrator | ok: "docs" 2025-07-06 20:44:06.182057 | 2025-07-06 20:44:06.451022 | orchestrator | ok: "artifacts" 2025-07-06 
2025-07-06 20:44:02.690563 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-06 20:44:04.406982 |
2025-07-06 20:44:04.407160 | PLAY [Post output play]
2025-07-06 20:44:04.435692 |
2025-07-06 20:44:04.435851 | LOOP [stage-output : Register sources]
2025-07-06 20:44:04.519742 |
2025-07-06 20:44:04.520060 | TASK [stage-output : Check sudo]
2025-07-06 20:44:05.419758 | orchestrator | sudo: a password is required
2025-07-06 20:44:05.563567 | orchestrator | ok: Runtime: 0:00:00.010589
2025-07-06 20:44:05.579124 |
2025-07-06 20:44:05.579333 | LOOP [stage-output : Set source and destination for files and folders]
2025-07-06 20:44:05.633619 |
2025-07-06 20:44:05.633921 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-07-06 20:44:05.702041 | orchestrator | ok
2025-07-06 20:44:05.709677 |
2025-07-06 20:44:05.709787 | LOOP [stage-output : Ensure target folders exist]
2025-07-06 20:44:06.181627 | orchestrator | ok: "docs"
2025-07-06 20:44:06.182057 |
2025-07-06 20:44:06.451022 | orchestrator | ok: "artifacts"
2025-07-06 20:44:06.680095 | orchestrator | ok: "logs"
2025-07-06 20:44:06.700147 |
2025-07-06 20:44:06.700339 | LOOP [stage-output : Copy files and folders to staging folder]
2025-07-06 20:44:06.734514 |
2025-07-06 20:44:06.734774 | TASK [stage-output : Make all log files readable]
2025-07-06 20:44:07.026895 | orchestrator | ok
2025-07-06 20:44:07.036705 |
2025-07-06 20:44:07.036854 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-07-06 20:44:07.092806 | orchestrator | skipping: Conditional result was False
2025-07-06 20:44:07.110801 |
2025-07-06 20:44:07.111020 | TASK [stage-output : Discover log files for compression]
2025-07-06 20:44:07.137050 | orchestrator | skipping: Conditional result was False
2025-07-06 20:44:07.145690 |
2025-07-06 20:44:07.145812 | LOOP [stage-output : Archive everything from logs]
2025-07-06 20:44:07.195576 |
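The stage-output tasks above only stage whatever the job dropped into the node's Zuul output directories before this point. A minimal sketch of that convention, assuming the usual zuul-jobs layout of ~/zuul-output (both the paths and the sample file below are assumptions for illustration, not taken from this log):

    # Assumed layout: anything placed here on the node is staged by stage-output
    # and later pulled to the executor by fetch-output.
    mkdir -p ~/zuul-output/{logs,artifacts,docs}
    cp /var/log/my-service.log ~/zuul-output/logs/   # hypothetical example file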
2025-07-06 20:44:07.195751 | PLAY [Post cleanup play]
2025-07-06 20:44:07.204172 |
2025-07-06 20:44:07.204312 | TASK [Set cloud fact (Zuul deployment)]
2025-07-06 20:44:07.263001 | orchestrator | ok
2025-07-06 20:44:07.275852 |
2025-07-06 20:44:07.275985 | TASK [Set cloud fact (local deployment)]
2025-07-06 20:44:07.310976 | orchestrator | skipping: Conditional result was False
2025-07-06 20:44:07.327641 |
2025-07-06 20:44:07.327808 | TASK [Clean the cloud environment]
2025-07-06 20:44:07.934740 | orchestrator | 2025-07-06 20:44:07 - clean up servers
2025-07-06 20:44:08.714869 | orchestrator | 2025-07-06 20:44:08 - testbed-manager
2025-07-06 20:44:08.801398 | orchestrator | 2025-07-06 20:44:08 - testbed-node-4
2025-07-06 20:44:08.886854 | orchestrator | 2025-07-06 20:44:08 - testbed-node-0
2025-07-06 20:44:08.971465 | orchestrator | 2025-07-06 20:44:08 - testbed-node-3
2025-07-06 20:44:09.061050 | orchestrator | 2025-07-06 20:44:09 - testbed-node-5
2025-07-06 20:44:09.153566 | orchestrator | 2025-07-06 20:44:09 - testbed-node-2
2025-07-06 20:44:09.240037 | orchestrator | 2025-07-06 20:44:09 - testbed-node-1
2025-07-06 20:44:09.337029 | orchestrator | 2025-07-06 20:44:09 - clean up keypairs
2025-07-06 20:44:09.359157 | orchestrator | 2025-07-06 20:44:09 - testbed
2025-07-06 20:44:09.387627 | orchestrator | 2025-07-06 20:44:09 - wait for servers to be gone
2025-07-06 20:44:20.712903 | orchestrator | 2025-07-06 20:44:20 - clean up ports
2025-07-06 20:44:20.909770 | orchestrator | 2025-07-06 20:44:20 - 2db0ef6b-5ca5-4ab9-9dba-f779228e3459
2025-07-06 20:44:21.397712 | orchestrator | 2025-07-06 20:44:21 - 78263ed8-02b9-4128-b1a9-8d36ecf4bbf5
2025-07-06 20:44:21.670555 | orchestrator | 2025-07-06 20:44:21 - 975dcfb3-98a8-4120-b64a-ca4915ce3701
2025-07-06 20:44:21.870761 | orchestrator | 2025-07-06 20:44:21 - b16af999-2ab0-466c-b4b2-e55c878b7f8c
2025-07-06 20:44:22.070089 | orchestrator | 2025-07-06 20:44:22 - d6d3588e-a7a3-4a1c-b43b-51d528a00e47
2025-07-06 20:44:22.276212 | orchestrator | 2025-07-06 20:44:22 - da17516d-7a6e-4a6f-8410-a7829245930c
2025-07-06 20:44:22.476122 | orchestrator | 2025-07-06 20:44:22 - e9e18d0e-964d-4fb2-aafd-d78fa251c8cd
2025-07-06 20:44:22.716656 | orchestrator | 2025-07-06 20:44:22 - clean up volumes
2025-07-06 20:44:22.847383 | orchestrator | 2025-07-06 20:44:22 - testbed-volume-1-node-base
2025-07-06 20:44:22.891886 | orchestrator | 2025-07-06 20:44:22 - testbed-volume-5-node-base
2025-07-06 20:44:22.939966 | orchestrator | 2025-07-06 20:44:22 - testbed-volume-manager-base
2025-07-06 20:44:22.983563 | orchestrator | 2025-07-06 20:44:22 - testbed-volume-0-node-base
2025-07-06 20:44:23.028533 | orchestrator | 2025-07-06 20:44:23 - testbed-volume-2-node-base
2025-07-06 20:44:23.075882 | orchestrator | 2025-07-06 20:44:23 - testbed-volume-4-node-base
2025-07-06 20:44:23.123714 | orchestrator | 2025-07-06 20:44:23 - testbed-volume-3-node-base
2025-07-06 20:44:23.165182 | orchestrator | 2025-07-06 20:44:23 - testbed-volume-2-node-5
2025-07-06 20:44:23.208224 | orchestrator | 2025-07-06 20:44:23 - testbed-volume-8-node-5
2025-07-06 20:44:23.249635 | orchestrator | 2025-07-06 20:44:23 - testbed-volume-3-node-3
2025-07-06 20:44:23.293560 | orchestrator | 2025-07-06 20:44:23 - testbed-volume-1-node-4
2025-07-06 20:44:23.340150 | orchestrator | 2025-07-06 20:44:23 - testbed-volume-7-node-4
2025-07-06 20:44:23.380608 | orchestrator | 2025-07-06 20:44:23 - testbed-volume-6-node-3
2025-07-06 20:44:23.421836 | orchestrator | 2025-07-06 20:44:23 - testbed-volume-5-node-5
2025-07-06 20:44:23.465633 | orchestrator | 2025-07-06 20:44:23 - testbed-volume-4-node-4
2025-07-06 20:44:23.508005 | orchestrator | 2025-07-06 20:44:23 - testbed-volume-0-node-3
2025-07-06 20:44:23.554744 | orchestrator | 2025-07-06 20:44:23 - disconnect routers
2025-07-06 20:44:23.660485 | orchestrator | 2025-07-06 20:44:23 - testbed
2025-07-06 20:44:25.041493 | orchestrator | 2025-07-06 20:44:25 - clean up subnets
2025-07-06 20:44:25.079598 | orchestrator | 2025-07-06 20:44:25 - subnet-testbed-management
2025-07-06 20:44:25.238100 | orchestrator | 2025-07-06 20:44:25 - clean up networks
2025-07-06 20:44:25.377808 | orchestrator | 2025-07-06 20:44:25 - net-testbed-management
2025-07-06 20:44:25.657457 | orchestrator | 2025-07-06 20:44:25 - clean up security groups
2025-07-06 20:44:25.700036 | orchestrator | 2025-07-06 20:44:25 - testbed-management
2025-07-06 20:44:25.849720 | orchestrator | 2025-07-06 20:44:25 - testbed-node
2025-07-06 20:44:25.956442 | orchestrator | 2025-07-06 20:44:25 - clean up floating ips
2025-07-06 20:44:25.985845 | orchestrator | 2025-07-06 20:44:25 - 81.163.193.103
2025-07-06 20:44:26.345625 | orchestrator | 2025-07-06 20:44:26 - clean up routers
2025-07-06 20:44:26.451442 | orchestrator | 2025-07-06 20:44:26 - testbed
2025-07-06 20:44:27.392256 | orchestrator | ok: Runtime: 0:00:19.611101
2025-07-06 20:44:27.397518 |
2025-07-06 20:44:27.397964 | PLAY RECAP
2025-07-06 20:44:27.398127 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-07-06 20:44:27.398190 |
2025-07-06 20:44:27.548713 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
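As the messages above show, the cleanup task tears the testbed down in dependency order: servers and the keypair first, then ports, volumes, router interfaces, subnets, networks, security groups, floating IPs, and finally the router. A rough manual equivalent with the plain OpenStack CLI, using the resource names printed in the log (the job itself does this through the OSISM testbed tooling, so this is only a sketch of the same order):

    export OS_CLOUD=test
    # Servers and keypair first ...
    openstack server delete --wait testbed-manager testbed-node-0 testbed-node-1 \
        testbed-node-2 testbed-node-3 testbed-node-4 testbed-node-5
    openstack keypair delete testbed
    # ... (leftover ports and the testbed-volume-* volumes would be deleted here) ...
    # ... then the network plumbing, router last:
    openstack router remove subnet testbed subnet-testbed-management
    openstack subnet delete subnet-testbed-management
    openstack network delete net-testbed-management
    openstack security group delete testbed-management testbed-node
    openstack floating ip delete 81.163.193.103
    openstack router delete testbed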
2025-07-06 20:44:27.549807 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-06 20:44:28.297977 |
2025-07-06 20:44:28.298141 | PLAY [Cleanup play]
2025-07-06 20:44:28.314348 |
2025-07-06 20:44:28.314493 | TASK [Set cloud fact (Zuul deployment)]
2025-07-06 20:44:28.365498 | orchestrator | ok
2025-07-06 20:44:28.372807 |
2025-07-06 20:44:28.372948 | TASK [Set cloud fact (local deployment)]
2025-07-06 20:44:28.408031 | orchestrator | skipping: Conditional result was False
2025-07-06 20:44:28.429576 |
2025-07-06 20:44:28.429745 | TASK [Clean the cloud environment]
2025-07-06 20:44:29.579724 | orchestrator | 2025-07-06 20:44:29 - clean up servers
2025-07-06 20:44:30.055719 | orchestrator | 2025-07-06 20:44:30 - clean up keypairs
2025-07-06 20:44:30.075728 | orchestrator | 2025-07-06 20:44:30 - wait for servers to be gone
2025-07-06 20:44:30.116735 | orchestrator | 2025-07-06 20:44:30 - clean up ports
2025-07-06 20:44:30.204897 | orchestrator | 2025-07-06 20:44:30 - clean up volumes
2025-07-06 20:44:30.275142 | orchestrator | 2025-07-06 20:44:30 - disconnect routers
2025-07-06 20:44:30.299329 | orchestrator | 2025-07-06 20:44:30 - clean up subnets
2025-07-06 20:44:30.319551 | orchestrator | 2025-07-06 20:44:30 - clean up networks
2025-07-06 20:44:30.473163 | orchestrator | 2025-07-06 20:44:30 - clean up security groups
2025-07-06 20:44:30.509370 | orchestrator | 2025-07-06 20:44:30 - clean up floating ips
2025-07-06 20:44:30.539248 | orchestrator | 2025-07-06 20:44:30 - clean up routers
2025-07-06 20:44:30.967754 | orchestrator | ok: Runtime: 0:00:01.362299
2025-07-06 20:44:30.972149 |
2025-07-06 20:44:30.972359 | PLAY RECAP
2025-07-06 20:44:30.972592 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-07-06 20:44:30.972666 |
2025-07-06 20:44:31.128032 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-06 20:44:31.129069 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-06 20:44:31.873205 |
2025-07-06 20:44:31.873427 | PLAY [Base post-fetch]
2025-07-06 20:44:31.890196 |
2025-07-06 20:44:31.890392 | TASK [fetch-output : Set log path for multiple nodes]
2025-07-06 20:44:31.956498 | orchestrator | skipping: Conditional result was False
2025-07-06 20:44:31.969913 |
2025-07-06 20:44:31.970070 | TASK [fetch-output : Set log path for single node]
2025-07-06 20:44:32.030006 | orchestrator | ok
2025-07-06 20:44:32.039125 |
2025-07-06 20:44:32.039314 | LOOP [fetch-output : Ensure local output dirs]
2025-07-06 20:44:32.576633 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/1b8c43777a9244299d5583d25d5cd521/work/logs"
2025-07-06 20:44:32.851943 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1b8c43777a9244299d5583d25d5cd521/work/artifacts"
2025-07-06 20:44:33.138648 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1b8c43777a9244299d5583d25d5cd521/work/docs"
2025-07-06 20:44:33.168296 |
2025-07-06 20:44:33.168663 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-07-06 20:44:34.129194 | orchestrator | changed: .d..t...... ./
2025-07-06 20:44:34.129651 | orchestrator | changed: All items complete
2025-07-06 20:44:34.129712 |
2025-07-06 20:44:34.861521 | orchestrator | changed: .d..t...... ./
2025-07-06 20:44:35.587949 | orchestrator | changed: .d..t...... ./
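The `.d..t...... ./` lines above are rsync itemize-changes output from fetch-output pulling the staged logs, artifacts and docs back into the executor's build work directory (the /var/lib/zuul/builds/.../work paths shown in the "Ensure local output dirs" loop). A manual approximation, assuming the ~/zuul-output layout on the node; the role drives rsync through Ansible, so this is only a sketch:

    WORK=/var/lib/zuul/builds/1b8c43777a9244299d5583d25d5cd521/work
    NODE=orchestrator   # SSH alias of the test node; placeholder
    for d in logs artifacts docs; do
        rsync -a --itemize-changes "${NODE}:zuul-output/${d}/" "${WORK}/${d}/"
    done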
2025-07-06 20:44:35.618926 |
2025-07-06 20:44:35.619085 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-07-06 20:44:35.667583 | orchestrator | skipping: Conditional result was False
2025-07-06 20:44:35.676618 | orchestrator | skipping: Conditional result was False
2025-07-06 20:44:35.702540 |
2025-07-06 20:44:35.702683 | PLAY RECAP
2025-07-06 20:44:35.702783 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-07-06 20:44:35.702893 |
2025-07-06 20:44:35.837158 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-06 20:44:35.839798 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-06 20:44:36.615395 |
2025-07-06 20:44:36.615560 | PLAY [Base post]
2025-07-06 20:44:36.630093 |
2025-07-06 20:44:36.630285 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-07-06 20:44:37.687117 | orchestrator | changed
2025-07-06 20:44:37.698130 |
2025-07-06 20:44:37.698275 | PLAY RECAP
2025-07-06 20:44:37.698398 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-07-06 20:44:37.698478 |
2025-07-06 20:44:37.829713 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-06 20:44:37.832149 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-07-06 20:44:38.656744 |
2025-07-06 20:44:38.656922 | PLAY [Base post-logs]
2025-07-06 20:44:38.667946 |
2025-07-06 20:44:38.668105 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-07-06 20:44:39.152723 | localhost | changed
2025-07-06 20:44:39.172300 |
2025-07-06 20:44:39.172528 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-07-06 20:44:39.199095 | localhost | ok
2025-07-06 20:44:39.203104 |
2025-07-06 20:44:39.203228 | TASK [Set zuul-log-path fact]
2025-07-06 20:44:39.229835 | localhost | ok
2025-07-06 20:44:39.243722 |
2025-07-06 20:44:39.243864 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-06 20:44:39.272047 | localhost | ok
2025-07-06 20:44:39.279704 |
2025-07-06 20:44:39.279885 | TASK [upload-logs : Create log directories]
2025-07-06 20:44:39.803390 | localhost | changed
2025-07-06 20:44:39.807891 |
2025-07-06 20:44:39.808041 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-07-06 20:44:40.315743 | localhost -> localhost | ok: Runtime: 0:00:00.007292
2025-07-06 20:44:40.325353 |
2025-07-06 20:44:40.325570 | TASK [upload-logs : Upload logs to log server]
2025-07-06 20:44:40.935336 | localhost | Output suppressed because no_log was given
2025-07-06 20:44:40.938734 |
2025-07-06 20:44:40.938928 | LOOP [upload-logs : Compress console log and json output]
2025-07-06 20:44:40.992283 | localhost | skipping: Conditional result was False
2025-07-06 20:44:40.998078 | localhost | skipping: Conditional result was False
2025-07-06 20:44:41.012007 |
2025-07-06 20:44:41.012277 | LOOP [upload-logs : Upload compressed console log and json output]
2025-07-06 20:44:41.079531 | localhost | skipping: Conditional result was False
2025-07-06 20:44:41.080175 |
2025-07-06 20:44:41.083160 | localhost | skipping: Conditional result was False
2025-07-06 20:44:41.098565 |
2025-07-06 20:44:41.098765 | LOOP [upload-logs : Upload console log and json output]